<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>AI &#8211; A musing Mulcahy</title>
	<atom:link href="https://www.amusingmulcahy.com/category/technology/ai/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.amusingmulcahy.com</link>
	<description>Management, technology, random thoughts</description>
	<lastBuildDate>Wed, 01 May 2019 18:49:08 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Pattern matching and the lost evolutionary high ground</title>
		<link>https://www.amusingmulcahy.com/pattern-matching-and-the-lost-evolutionary-high-ground/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=pattern-matching-and-the-lost-evolutionary-high-ground</link>
					<comments>https://www.amusingmulcahy.com/pattern-matching-and-the-lost-evolutionary-high-ground/#respond</comments>
		
		<dc:creator><![CDATA[Ger]]></dc:creator>
		<pubDate>Wed, 01 May 2019 18:47:26 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://www.amusingmulcahy.com/?p=391</guid>

					<description><![CDATA[The human brain loves patterns. We identify patterns everywhere, even where they don’t exist. In the past, this has been of evolutionary benefit. Being able to recognise poisonous tree frogs, tigers, gaps in the forest floor, or tribe members who look like us has been helpful to our survival. The brain is … <a href="https://www.amusingmulcahy.com/pattern-matching-and-the-lost-evolutionary-high-ground/" class="more-link">Continue reading<span class="screen-reader-text"> "Pattern matching and the lost evolutionary high ground"</span></a>]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-image"><figure class="alignleft is-resized"><img decoding="async" src="https://www.amusingmulcahy.com/wp-content/uploads/2019/05/patterns.png" alt="" class="wp-image-423" width="238" height="207"/></figure></div>



<p>The human brain loves patterns.  We identify patterns everywhere, even where they don&#8217;t exist.  In the past, this has been of evolutionary benefit: being able to recognise poisonous tree frogs, tigers, gaps in the forest floor, or tribe members who look like us has been helpful to our survival.  The brain is an expensive engine to run, and any heuristic that makes us more efficient saves us energy. </p>


<p><span id="more-391"></span><!--more--></p>


<p>Until quite recently, we thought we were the rulers of the world of broad pattern-matching.  Sure, animals could recognise patterns for the same evolutionary reasons we could, but when it came to image-based pattern matching (as in reCAPTCHAs, where one has to pick out all of the images containing traffic lights) or to creating relationships between language, images and sounds (the word &#8220;cat&#8221;, a picture of a cat and the sound of a cat), we pretty much had the mountain peaks to ourselves. </p>



<p>We learn about patterns by experience.  Ray Dalio writes in Principles about the benefit of being able to identify &#8220;another one of those&#8221; based on the study of past events or past experiences.   This recognition of patterns aids in decision-making and provides comfort that we&#8217;ve seen this kind of problem before.</p>



<p>We sometimes misidentify patterns.  Humans mistake correlation for causation so frequently that it&#8217;s truly not funny (although the site <a aria-label="Spurious Correlations  (opens in a new tab)" rel="noreferrer noopener" href="http://www.tylervigen.com/spurious-correlations" target="_blank">Spurious Correlations</a> has some great examples, such as the relationship between the divorce rate in Maine and sales of margarine). </p>
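As a toy illustration of how easily two causally unrelated downward trends produce a near-perfect correlation coefficient, here is a short Python sketch. The figures below are made up for illustration only, not the actual data from Spurious Correlations:

```python
import numpy as np

# Hypothetical numbers for illustration -- NOT the real Maine divorce-rate
# or margarine-consumption figures. Both series simply trend downward.
divorce_rate = np.array([5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 4.2, 4.2, 4.1])
margarine_lbs = np.array([8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7])

# Pearson correlation: two unrelated quantities that happen to decline
# together over the same decade score close to 1 -- correlation, no causation.
r = np.corrcoef(divorce_rate, margarine_lbs)[0, 1]
print(f"r = {r:.2f}")
```

Any two series that merely share a trend will score highly on this measure, which is exactly why a correlation coefficient on its own says nothing about cause.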



<p>Visual patterns can be tremendously appealing.  The arrangement of a honeycomb, the fronds of a fern, or the shapes of snowflakes can all be delightful to us. The rule of thirds is a pattern in photographic composition that is almost universally accepted. Similarly, patterns in mathematics or music can be hugely satisfying to our brains.</p>



<p>We&#8217;ve communicated these patterns and their ramifications through language, art, science, music, and culture throughout the generations.  Being able to recognise patterns matters in all sorts of fields.  Biologists can use pattern-based knowledge to identify pests.  Medical experts can use patterns to identify human diseases through visual identification or the description of symptoms and their spread.  Technologists can use patterns as shortcuts in code or for speedier troubleshooting.  This use of patterns has been a huge advantage to us &#8211; and we are very good at it.</p>



<p>Max Tegmark, in Life 3.0, argues that we may be losing that evolutionary high ground.  Computers are now faster and more effective than humans in identifying patterns in key areas.  Self-driving cars have comprehensively demonstrated the capability of AI-driven visual pattern matching (along with many other complex decision-making algorithms).&nbsp; In addition, despite some highly publicised incidents (and <a aria-label="one in which Tesla's Autopilot feature failed to differentiate (opens in a new tab)" rel="noreferrer noopener" href="https://www.nytimes.com/2017/01/19/business/tesla-model-s-autopilot-fatal-crash.html" target="_blank">one in which Tesla&#8217;s Autopilot feature failed to differentiate</a> between the side of a trailer and the brightly-lit sky), self-driving cars are arguably safer than human drivers.</p>



<p>In the medical arena, AI is proving to be as good as, if not better than, trained radiologists and other specialists at <a aria-label="identifying injuries and disease (opens in a new tab)" rel="noreferrer noopener" href="https://news.stanford.edu/2017/11/15/algorithm-outperforms-radiologists-diagnosing-pneumonia/" target="_blank">identifying injuries and disease</a>.  Articles on selective inattention <a aria-label="such as this one  (opens in a new tab)" rel="noreferrer noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3964612/" target="_blank">such as this one</a> describe how trained experts missed the image of a gorilla inserted into medical images, <strong>despite looking right at the gorilla.</strong>  This paper was based on Daniel Simons&#8217; Monkey Business Illusion, which gobsmacked me when I first saw it.</p>



<p>The ability of AI to identify patterns in large amounts of information is considerably better than ours. We don&#8217;t have the processing capacity or &#8220;attentional&#8221; capability to review very large volumes of data.  We become tired, we lose focus, we become distracted.  Computers can run 24x7, processing trillions of bits of information without fatigue or error.</p>



<p>So does this mean we&#8217;re irrelevant, and doomed to be jobless?</p>



<p>I would argue strongly that the answer is a resounding no.  There are still areas where human expertise has not been outstripped by computers.  We can interpret patterns of human behaviour and the expression of emotion, and develop appropriate responses.  Human-to-human interaction is still a huge part of how the world operates (without it we would clearly not exist).  Our ability to excel in the workplace and elsewhere should not be threatened by the addition of AI &#8211; it should be enhanced by it.</p>



<p>In my opinion, the most interesting writing about AI at the moment is not focussed on how computers will displace us &#8211; the many sensationalist &#8220;AI is coming for your job!&#8221; headlines.  It is about how we can leverage AI to improve our capabilities in partnership with technology. </p>



<p>We&#8217;ve lost the pattern-matching crown (in some areas many years ago), just as we lost the automotive construction battle, and the chess-playing trophy, and as we will cede many other areas to technology in the future.  The great thing about humans is we always find new problems to solve (or failing that, create them for ourselves). </p>



]]></content:encoded>
					
					<wfw:commentRss>https://www.amusingmulcahy.com/pattern-matching-and-the-lost-evolutionary-high-ground/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Should we be afraid of AI?</title>
		<link>https://www.amusingmulcahy.com/should-we-be-afraid-of-ai/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=should-we-be-afraid-of-ai</link>
					<comments>https://www.amusingmulcahy.com/should-we-be-afraid-of-ai/#respond</comments>
		
		<dc:creator><![CDATA[Ger]]></dc:creator>
		<pubDate>Sun, 27 Jan 2019 13:46:25 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://www.amusingmulcahy.com/?p=335</guid>

					<description><![CDATA[There have been warnings for many years about the potential for disaster with Artificial Intelligence implementations.  Many luminaries, from Elon Musk to Stephen Hawking, have warned about the implications if we unwittingly create a robotic overlord who deems that we are irrelevant at best and destructive at worst, and decides the world will be better … <a href="https://www.amusingmulcahy.com/should-we-be-afraid-of-ai/" class="more-link">Continue reading<span class="screen-reader-text"> "Should we be afraid of AI?"</span></a>]]></description>
										<content:encoded><![CDATA[<p><img decoding="async" class="alignleft  wp-image-336" src="https://www.amusingmulcahy.com/wp-content/uploads/2019/01/brain_AI.png" alt="" width="162" height="137" />There have been warnings for many years about the potential for disaster with Artificial Intelligence implementations.  Many luminaries, from Elon Musk to Stephen Hawking, have warned about the implications if we unwittingly create a robotic overlord who deems that we are irrelevant at best and destructive at worst, and decides the world will be better off without us.</p>
<p>So why am I concerned now, and should you be?</p>
<p><span id="more-335"></span></p>
<p>The trigger for me writing this piece may seem strange.  <a href="https://deepmind.com/">DeepMind</a> AI agents <a href="https://www.extremetech.com/gaming/284441-deepmind-ai-challenges-pro-starcraft-ii-players-wins-almost-every-match" target="_blank" rel="noopener">recently won</a> the majority of a series of StarCraft II Real Time Strategy (RTS) games against top professional players, beating the humans in 10 out of 11 games.  RTS games can be pretty complex, and while we&#8217;ve seen human champions fall before (for example Kasparov and Ke Jie, in chess and Go respectively), this is the first time that the complexities of StarCraft II have been mastered by an AI agent. Why is this significant?</p>
<p>Unlike chess, the playing area in StarCraft II is large, and due to the &#8220;fog of war&#8221; (a veil over unexplored areas), the contents of the map are invisible to the player at the start of the game. In other words, you can&#8217;t see what your opponent is doing until you&#8217;ve explored the map or their units show up in your part of it.  There are a huge number of variables in choosing which strategy to adopt, and no single &#8220;best strategy&#8221; for victory: early rushes, where a low-tech, high-volume army charges and wipes out the opponent&#8217;s base, can be successful, but long, drawn-out matches are just as common.</p>
<p>Poor resource management decisions early in the game (e.g. planning to invest in one technology and neglecting others) can have long-term implications.  And the variability in capability and unit strengths for each of the three races also impacts how a strategy is built.</p>
<p>In addition, as the name of the game category suggests, all action takes place in real time.  Constantly changing variables mean that multiple decisions need to be made at any point in time, and players must constantly adjust based on new information.</p>
<p>Again, &#8220;So What?&#8221;, I hear the non-gamers out there ask.  Why should you care?  Tim Urban, on his hugely interesting site &#8220;<a href="https://waitbutwhy.com/">Wait but Why</a>&#8221; in a two-part post on AI, positions the situation like this &#8211; AI will either mean our eventual ascendancy to immortality (something I think is pretty horrifying) or our falling off the evolutionary balance beam into extinction.</p>
<p>The key point in his argument, for me, is that should AI ever reach the level of AGI (Artificial General Intelligence), it will so rapidly outstrip us and achieve ASI (Artificial Super Intelligence) that we won&#8217;t even be able to comprehend how vastly superior the new intellect (and our new overlord) is.  For a visual representation of what that might look like, take a look at his <a href="https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html" target="_blank" rel="noopener">Intelligence Staircase</a>.</p>
<p>In the context of StarCraft II, the ability of the AI to conduct 200 years of training in a single week (at current silicon speeds) shows the evolutionary advantage a computer-based intelligence has over a human.  To do the same level of training would take a human&#8230; 200 years 🙂  We can&#8217;t operate faster than our biology allows.  We&#8217;re not able to test, analyse and adjust strategies as quickly or accurately as an AI agent can.  We&#8217;re also hampered by cognitive biases, a lack of perfect recall and an inability to execute more than a single task at a time.</p>
<p>We&#8217;re seeing real progress in the implementation of autonomous vehicles.  AI-driven assistants are becoming more and more part of our lives.  Deep-learning algorithms are mining the world&#8217;s data, driving everything from investment decisions to medical diagnoses. The technologies to develop, for example, lethal autonomous weapons (or killer robots, for our sci-fi aficionados) are all available today.  We have a huge proliferation of ANIs (Artificial Narrow Intelligences) which are good at one specific task or set of tasks.</p>
<p>If we are not intentional about the path we&#8217;re following, it is not a huge leap to posit a situation where we inadvertently create an overarching AGI, then ASI, which is neither malevolent nor benevolent.  It will just be Other, and, as with the AI that beat the StarCraft II champions, its decision-making will be largely incomprehensible to human observers.  If you really want to be worried about this, Nick Bostrom&#8217;s book, Superintelligence, paints a bleak picture indeed.</p>
<p>While there are clearly many advantages to leveraging AI, we need to be much more aware of the implications.  And while the recent developments may seem trivial, they are one more step along the path to humanity no longer being the &#8220;smartest&#8221; entity on the planet, with all that may entail.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.amusingmulcahy.com/should-we-be-afraid-of-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
