{"id":1644,"date":"2019-05-15T18:30:19","date_gmt":"2019-05-15T22:30:19","guid":{"rendered":"https:\/\/www.techaidemontreal.org\/?p=1644"},"modified":"2021-08-31T11:02:53","modified_gmt":"2021-08-31T15:02:53","slug":"5-ai-developments-that-will-impact-your-startup","status":"publish","type":"post","link":"https:\/\/www.techaidemontreal.org\/en\/5-ai-developments-that-will-impact-your-startup\/","title":{"rendered":"5 AI Developments That Will Impact Your Startup"},"content":{"rendered":"<p><em>THIS ARTICLE WAS ORIGINALLY PUBLISHED ON THE REAL VENTURES <\/em><a href=\"https:\/\/medium.com\/believing\/5-ai-developments-that-will-impact-your-startup-efe84efaacfd\" target=\"_blank\" rel=\"noopener\"><em>BELIEVING BLOG<\/em><\/a><em> ON MAY 9, 2019<\/em>.<\/p>\n<p><strong>Takeaways from the 2019 TechAide AI Conference<\/strong><\/p>\n<p>While artificial intelligence seemed fringe and cutting-edge even a few years ago, it is already at the core of many of the technologies we use and will only become more integral to the tools and processes of every business and industry. Because of the stunning ability of algorithms to help us parse information and optimize processes, AI has the potential to democratize access to services from legal advice to healthcare to finance. For startups in particular, keeping up with the latest trends and research in AI will make the difference between a good idea and a world-altering product.<\/p>\n<p>In this article, we have summarized some of the key takeaways from the\u00a0<a href=\"https:\/\/www.techaidemontreal.org\/ai-conference\" target=\"_blank\" rel=\"noopener\">TechAide AI conference<\/a>\u00a0led by Google Brain\u2019s Hugo Larochelle on April 26, 2019. 
And while predicting the impact this research will have on startups is a bit like fortune-telling\u200a\u2014\u200asomething machine learning (ML) practitioners can appreciate\u200a\u2014\u200atomorrow\u2019s leaders will not only be influenced by these latest findings but also inspired by the overarching theme of the conference: giving back.<\/p>\n<p>Here\u2019s a fast take on the day\u2019s highlights\u2026<\/p>\n<p><strong>Causation is Not Correlation<\/strong><\/p>\n<p><strong>YOSHUA BENGIO, U DE M, MILA,\u00a0IVADO<\/strong><\/p>\n<blockquote><p>\u201cWe don\u2019t have systems that learn sufficiently rich understandings of the world\u2026 that even a two-year-old has in terms of physics or psychology.\u201d<\/p><\/blockquote>\n<p><em>\u2014 Yoshua Bengio on \u201c<\/em><a href=\"https:\/\/arxiv.org\/abs\/1901.10912\" target=\"_blank\" rel=\"noopener\"><em>A Meta-Transfer Objective for Learning to Disentangle Causal Mechanisms<\/em><\/a><em>\u201d or common sense in ML<\/em><\/p>\n<p><strong>THE GIST:<\/strong><\/p>\n<p>The insight of Bengio\u2019s team was to look at how quickly a learner adapts to small changes in the frequency of the objects it \u201cencounters\u201d in the training distribution. By enabling a kind of \u201csquirreling away\u201d of some learned details, the agent can use that information later to \u201cremind\u201d itself of a similar past experience.<\/p>\n<p>Running their \u201ctoy\u201d two-dimensional experiment and tweaking the data in small ways, they were able to separate out the correct causal structure of the variables in the data. 
Much work remains to be done, says Bengio, in developing this approach, particularly in scaling up the experimental structures explored.<\/p>\n<p><strong>THE TAKEAWAY:<\/strong><\/p>\n<p>This\u00a0<a href=\"https:\/\/arxiv.org\/abs\/1901.10912\" target=\"_blank\" rel=\"noopener\">research<\/a>\u00a0is helping define the direction of the field, relaxing what is usually considered a rigid assumption to uncover a potent tool for discovery: causality. By accepting dependence but carefully scraping it away, the team encapsulates the pertinent knowledge to piece together the scaffold of causal relationships hidden in the data. Typically, in ML you have a dataset representing some variables, and then you massage, manipulate and transform it until your system produces the optimal desired result on novel data. But as we all know, life is messy and variables are interdependent. If we can find a robust way to extract causal relationships, just imagine the kinds of questions we can begin to answer.<\/p>\n<p><strong>THE POTENTIAL IMPACT:<\/strong><\/p>\n<p>World-altering. This is the type of artificial intelligence that the field\u2019s pioneers have been working toward for decades, and it could help take us from the \u201csoft\u201d AI that we currently have to more robust deep learning models and artificial neural networks that are able to truly learn from past exposure.<\/p>\n<p><strong>Learning With Less\u00a0Effort<\/strong><\/p>\n<p><strong>NEGAR ROSTAMZADEH, ELEMENT\u00a0AI<\/strong><\/p>\n<blockquote><p>\u201cIt takes up to an hour for each image. For training a segmentation network, you need to have thousands of images. Wouldn\u2019t it be great to learn a task with less annotation? [\u2026] Also, we have access to huge [amounts] of data which are not necessarily annotated for the needs we have. 
Wouldn\u2019t it be great [to use] that information?\u201d<\/p><\/blockquote>\n<p><em>\u2014 Negar Rostamzadeh on\u00a0<\/em><a href=\"https:\/\/arxiv.org\/abs\/1807.09856\" target=\"_blank\" rel=\"noopener\"><em>Learning with Less Labeling Effort<\/em><\/a><em>, @TechAIDE 2019<\/em><\/p>\n<p><strong>THE GIST:<\/strong><\/p>\n<p>If you have any experience with datasets, you know that labelling takes a\u00a0<em>long<\/em>\u00a0time. And computer vision uses\u00a0<em>lots<\/em>\u00a0of images. Finding smarter ways to label these images wouldn\u2019t just be nice; it would have the potential to transform the field.<\/p>\n<p>To conquer the drudgery (and ultimately the cost) of pixel-wise labelling and image segmentation, Dr. Rostamzadeh and her colleagues devised novel point-level methods, setting new benchmarks for essential tasks. What\u2019s more, by cunningly leveraging pre-labelled data from the Internet, she and her team showed that the method has legs.<\/p>\n<p><strong>THE TAKEAWAY:<\/strong><\/p>\n<p>The research defines a scalable technique that gets results. With time measured in dollars, such tools will certainly cut costs. Further, by removing the time-consuming drudgery involved in labelling, humans, who are typically far more creative and ambitious than computers, will be freed up to do more interesting and analytical work.<\/p>\n<p><strong>THE POTENTIAL IMPACT:<\/strong><\/p>\n<p>Time lost is irreplaceable. Agile, Lean, or whatever your M.O., this kind of new efficiency has the potential to reduce major technical debt down the road. Further, lower costs in computer vision can translate into savings for consumers of products using this technology, with impacts from healthcare to transportation.<\/p>\n<p><strong>Context is Everything<\/strong><\/p>\n<p><strong>JAMIE KIROS, GOOGLE BRAIN\u00a0TORONTO<\/strong><\/p>\n<blockquote><p>\u201cA really common theme that I want you to remember in this talk is that meaning is not in language. 
The language indicates the meaning. [\u2026] A lot of our current methods that we\u2019re using now assumes [the former] and ignores the more indicative components. It turns out by [not ignoring them] we can actually go a long\u00a0way.\u201d<\/p><\/blockquote>\n<p><em>\u2014 Jamie Kiros on \u201cGrounding and Structure in Natural Language Processing\u201d,<br \/>\n@TechAIDE 2019<\/em><\/p>\n<p><strong>THE GIST:<\/strong><\/p>\n<p>Natural Language Processing (NLP) is hard. As Kiros walked the audience through her\u00a0<a href=\"https:\/\/ai.google\/research\/pubs\/pub47099\" target=\"_blank\" rel=\"noopener\">work<\/a>, she noted that \u201c<em>context is everything.<\/em>\u201d It\u2019s what grounds the meaning of what is being communicated. It enables structures to be inferred. So why do researchers ignore it in most machine learning work? Because it\u2019s\u00a0<em>that<\/em>\u00a0hard.<\/p>\n<p>\u201cGeneralized Machine Translation (GMT)\u201d refers to the set of ML problems that map a translation task to any other form or modality. Think: the usual translation between human languages, captioning images, interfacing with machines, and creating meaningful utterances. Many of these tasks go beyond the familiar (have you tried\u00a0<a href=\"https:\/\/news.ycombinator.com\/item?id=1952356\" target=\"_blank\" rel=\"noopener\">beatboxing with Google Translate?<\/a>), but it\u2019s work like Kiros\u2019 that will get us there.<\/p>\n<p>Rather than making the language model ever bigger, or trying to force their models to deliver what they want through strong, programmatic constraints, Kiros and her colleagues gently nudge their models in the right direction by adjusting the representations the model learns from. Representations are key elements of machine learning; you can think of them as the way data presents itself. Coordinate systems, for example, are a familiar family of representations. 
In the rectangular system (remember x-axis, y-axis), the map\u00a0<a href=\"https:\/\/www.wolframalpha.com\/input\/?i=plot+sin%28x%29\" target=\"_blank\" rel=\"noopener\"><em>f(x) = sin(x)<\/em><\/a>\u00a0traces the familiar rolling pattern of rise and fall, never more than one unit from the axis. If we transform, or\u00a0<em>encode<\/em>, the\u00a0<em>inputs x\u00a0<\/em>in\u00a0<em>polar<\/em>\u00a0coordinates,\u00a0<a href=\"https:\/\/www.wolframalpha.com\/input\/?i=polar+plot+sin%28x%29\" target=\"_blank\" rel=\"noopener\">it appears as a circle<\/a>. To get back, we can\u00a0<em>decode<\/em>\u00a0just as easily. So information can be\u00a0<a href=\"https:\/\/www.quora.com\/What-is-an-Encoder-Decoder-in-Deep-Learning\" target=\"_blank\" rel=\"noopener\">encoded and decoded<\/a>\u00a0in different ways, depending on the desired outputs. By applying common NLP methods and changing only the encoder\/decoder networks, they achieved some promising results.<\/p>\n<p><strong>THE TAKEAWAY:<\/strong><\/p>\n<p>Ignoring the context of a communication places a very rigid constraint on any solution\u2019s ability to capture the richness of language. It\u2019s hard to think about GMT without alluding to the Universal Translator from\u00a0<em>Star Trek<\/em>. It rarely met a language it couldn\u2019t decode (<a href=\"https:\/\/en.wikipedia.org\/wiki\/Darmok\" target=\"_blank\" rel=\"noopener\">\u201cDarmok\u201d ST:TNG ep. 102<\/a>). But in that aspirational world, Roddenberry was telling us his belief that we must find a way to understand one another if we want to find peace. After all, if the objective of solving GMT is to improve communication in general, we can no longer ignore context.<\/p>\n<p><strong>THE POTENTIAL IMPACT:<\/strong><\/p>\n<p>Companies working in natural language processing already know how challenging it is to gain a clear understanding of the mechanisms of living language, a challenge that sidestepping the question of context only deepens. 
As systems take strides toward inferring context, applications could emerge from customer service to translation to education, with major democratization of information and access as a possible result.<\/p>\n<p><strong>AI to Diagnose Human\u00a0Disease<\/strong><\/p>\n<p><strong>JENNIFER CHAYES, MICROSOFT, NEW\u00a0ENGLAND<\/strong><\/p>\n<blockquote><p>\u201cThe human immune system knows about the diseases that we have. How do we decode that and find out about them? How can we use the immune system to fight cancer in a way that is so much more targeted with less collateral damage than chemo and radiation and the standard methods we\u00a0use?\u201d<\/p><\/blockquote>\n<p>\u2014 Jennifer Chayes, on minimizing the patient impact of drug trials, @TechAIDE 2019<\/p>\n<p><strong>THE GIST:<\/strong><\/p>\n<p><a href=\"https:\/\/www.adaptivebiotech.com\/\" target=\"_blank\" rel=\"noopener\">Adaptive Biotechnologies<\/a>, in conjunction with Dr. Chayes\u2019 Boston Microsoft lab, is leveraging the noble T-cell\u2019s \u201cseek, multiply, and destroy\u201d mechanism to change the face of how we diagnose and treat disease. First up: late-stage cancers and auto-immune diseases.<\/p>\n<p>Adaptive was able to demonstrate that it is\u00a0<em>possible<\/em>\u00a0to predict with up to 95 percent accuracy which patients had a particular virus by using their profiled T-cell receptors. It is a bittersweet result, however, as their system didn\u2019t generalize easily. In a new study, they will attempt to predict the receptor binding energies of the 10\u00b9\u00b2 possible receptors to achieve a similar end, building on the work of\u00a0<a href=\"http:\/\/med.stanford.edu\/davislab\/Research.html\" target=\"_blank\" rel=\"noopener\">Mark Davis at Stanford<\/a>.<\/p>\n<p><strong>THE TAKEAWAY:<\/strong><\/p>\n<p>Chayes believes that ML\u2019s advantage over the traditional approaches of computational biology lies in the very nature of the problems. 
The sparseness of the data (think: how \u201clocally\u201d the data is packaged) makes these problems better suited to ML tools and techniques than to current methods.<\/p>\n<p>But clinical research is costly\u200a\u2014\u200ain particular, drug trials. In cancer immunotherapy, when standard treatments aren\u2019t getting it done, designing the custom-for-you drug can run into million-dollar territory. But, as she explains, \u201cwhat if you know [a drug] is not going to work [for a patient] or have bad side effects? If you pull those people out, you can get certain drugs that never would have made it through clinical trial approved for the right people.\u201d<\/p>\n<p><strong>THE POTENTIAL IMPACT:<\/strong><\/p>\n<p>Advancing machine learning to tackle the highly specific, difficult computational problems at the intersection of molecular biology and computational science requires the cooperation of large institutions, researchers and startups. Working together for mutual benefit opens the possibility of a massive payoff for society through reduced treatment costs or even the eradication of diseases.<\/p>\n<p><strong>Machine Learning and Creativity<\/strong><\/p>\n<p><strong>PABLO SAMUEL CASTRO, GOOGLE BRAIN LAB,\u00a0MONTREAL<\/strong><\/p>\n<blockquote><p>\u201cI\u2019m really interested in ways to incorporate [machine learning] techniques into live performances, which introduces a whole new level of complexities because I play with other musicians and I can\u2019t be like \u2018Ok. Hold on. 
The model is doing its inference\u2026 ok, now we can continue.\u2019 It really has to adapt to whatever is happening around it and it has to sound good because I\u2019m usually playing in front of a paying audience.\u201d<\/p><\/blockquote>\n<p>\u2014 Pablo Samuel Castro on the potential of live jamming with algorithms<br \/>\n@TechAIDE 2019<\/p>\n<p><strong>THE GIST:<\/strong><\/p>\n<p>Google has a rich history of making art with its AI projects: think\u00a0<a href=\"https:\/\/duckduckgo.com\/?q=google+ai+artworks&amp;atb=v154-1&amp;iar=images&amp;iax=images&amp;ia=images\" target=\"_blank\" rel=\"noopener\">dreamscape horrors<\/a>\u00a0meets Van Gogh, or the first AI-powered\u00a0<a href=\"https:\/\/www.google.com\/doodles\/celebrating-johann-sebastian-bach\" target=\"_blank\" rel=\"noopener\">Google Doodle in celebration of JS Bach<\/a>. It is indeed, as Castro beams, \u201cdelightfully silly\u201d. A whimsical salute to the giant of Baroque music, the Doodle showcases the power of an off-the-shelf model to create something unique; for Castro, it was also \u201ca really creative and useful way of getting people in touch in a hands-on fashion with machine learning technologies\u201d.<\/p>\n<p><strong>THE TAKEAWAY:<\/strong><\/p>\n<p>AI isn\u2019t going to kill the YouTube star, as Castro demonstrated live, jamming with a network on stage. It represents a new era of creative pursuit and, at its core, reminds us that human beings are creators. All art is rule-breaking of some kind. Machine learning, Castro argues, is just another tool we can use to break those rules wide open.<\/p>\n<p><strong>THE POTENTIAL IMPACT:<\/strong><\/p>\n<p>This has the potential to break down walls for artists, enabling creators to build incredible art with the aid of inexpensive tools that formerly only major record labels or well-funded, \u201cfamous\u201d artists had access to. Further, it\u2019s fun. 
The future of humanity isn\u2019t all about optimization and financial returns. Ultimately, humans will continue to produce and consume art regardless of the practicality or necessity of it, and as we know, there will always be opportunity for innovators who put smiles on people\u2019s faces.<\/p>\n<p><em>Real Ventures is proud to support\u00a0<\/em><a href=\"https:\/\/www.techaidemontreal.org\/\" target=\"_blank\" rel=\"noopener\"><em>TechAide<\/em><\/a><em>, which raised more than\u00a0<\/em><a href=\"https:\/\/twitter.com\/TechAideMTL\/status\/1122979097142071297\" target=\"_blank\" rel=\"noopener\"><em>$165,000 for Centraide of Greater Montreal<\/em><\/a><em>\u00a0through the 2019 TechAide AI conference.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>THIS ARTICLE WAS ORIGINALLY PUBLISHED ON THE REAL VENTURES BELIEVING BLOG ON MAY 9, 2019. Takeaways from the 2019 TechAide AI Conference While even a few years ago, artificial intelligence seemed fringe and cutting-edge, it\u2019s already at the core of many of the technologies we use and will only become more integral to the tools 
[&hellip;]<\/p>\n","protected":false},"author":11,"featured_media":1648,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_EventAllDay":false,"_EventTimezone":"","_EventStartDate":"","_EventEndDate":"","_EventStartDateUTC":"","_EventEndDateUTC":"","_EventShowMap":false,"_EventShowMapLink":false,"_EventURL":"","_EventCost":"","_EventCostDescription":"","_EventCurrencySymbol":"","_EventCurrencyCode":"","_EventCurrencyPosition":"","_EventDateTimeSeparator":"","_EventTimeRangeSeparator":"","_EventOrganizerID":[],"_EventVenueID":[],"_OrganizerEmail":"","_OrganizerPhone":"","_OrganizerWebsite":"","_VenueAddress":"","_VenueCity":"","_VenueCountry":"","_VenueProvince":"","_VenueState":"","_VenueZip":"","_VenuePhone":"","_VenueURL":"","_VenueStateProvince":"","_VenueLat":"","_VenueLng":"","_VenueShowMap":false,"_VenueShowMapLink":false,"footnotes":""},"categories":[53],"tags":[101,121,143,141,144,145,110,146,142,76,100],"class_list":["post-1644","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-event","tag-ai","tag-ai-conference","tag-element-ai","tag-ivado","tag-jamie-kiros","tag-jennifer-chayes","tag-mila","tag-pablo-samuel-castro","tag-research","tag-startup","tag-yoshua-bengio"],"_links":{"self":[{"href":"https:\/\/www.techaidemontreal.org\/en\/wp-json\/wp\/v2\/posts\/1644","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.techaidemontreal.org\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.techaidemontreal.org\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.techaidemontreal.org\/en\/wp-json\/wp\/v2\/users\/11"}],"replies":[{"embeddable":true,"href":"https:\/\/www.techaidemontreal.org\/en\/wp-json\/wp\/v2\/comments?post=1644"}],"version-history":[{"count":3,"href":"https:\/\/www.techaidemontreal.org\/en\/wp-json\/wp\/v2\/posts\/1644\/revisions"}],"predecessor-version":[{"id":1647,"href":"https:\/\/www
.techaidemontreal.org\/en\/wp-json\/wp\/v2\/posts\/1644\/revisions\/1647"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.techaidemontreal.org\/en\/wp-json\/wp\/v2\/media\/1648"}],"wp:attachment":[{"href":"https:\/\/www.techaidemontreal.org\/en\/wp-json\/wp\/v2\/media?parent=1644"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.techaidemontreal.org\/en\/wp-json\/wp\/v2\/categories?post=1644"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.techaidemontreal.org\/en\/wp-json\/wp\/v2\/tags?post=1644"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}