<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Technology &#8211; A musing Mulcahy</title>
	<atom:link href="https://www.amusingmulcahy.com/category/technology/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.amusingmulcahy.com</link>
	<description>Management, technology, random thoughts</description>
	<lastBuildDate>Sun, 23 Oct 2022 13:28:16 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Cloud Security is Simple</title>
		<link>https://www.amusingmulcahy.com/cloud-security-is-simple/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=cloud-security-is-simple</link>
					<comments>https://www.amusingmulcahy.com/cloud-security-is-simple/#respond</comments>
		
		<dc:creator><![CDATA[Ger]]></dc:creator>
		<pubDate>Sun, 23 Oct 2022 13:25:39 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Cloud Security]]></category>
		<category><![CDATA[Management]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[#Cloud #CloudSecurity]]></category>
		<guid isPermaLink="false">https://www.amusingmulcahy.com/?p=1230</guid>

					<description><![CDATA[Cloud Security principles appear simple, but execution becomes incredibly complex at scale.]]></description>
										<content:encoded><![CDATA[
<p>If you are working in Cloud Security in any area (or Cloud assurance or Governance), the title of this post probably caught your attention.  You may have thought to yourself, &#8220;ah, clickbait!&#8221;.  While this is a somewhat attention-grabbing statement, I don&#8217;t intend it to be clickbait. Instead, I hope to spark some discussion about why something simple on its face is simultaneously very difficult to get right.</p>



<span id="more-1230"></span>



<p>I sometimes have conversations with people who don&#8217;t know me well and think I&#8217;m nice (or even &#8220;too nice&#8221;).  Depending on the nature of the conversation, I may correct them.  I intend to be <strong>kind</strong>, which appears as niceness but is fundamentally different.  People sometimes draw a false equivalence between the attributes of kindness and those of niceness.   Someone can be kind but not particularly nice.  As I&#8217;ve written elsewhere, it can be kind to give someone extremely blunt feedback, but it may not feel nice to the person receiving it.  Similarly, something that is simple, or made up of simple components, is not necessarily easy. For example, the idea of climbing a mountain is simple to comprehend but potentially very challenging to execute.</p>



<p>What do I mean when I say Cloud Security is simple?  The principles that drive Cloud Security are really straightforward:</p>



<ul>
<li>Have a good governance structure.</li>
<li>Base your identity and access management practices on least privilege, and maintain that stance.</li>
<li>Ensure visibility everywhere in your environment.</li>
<li>Put appropriate controls in place to segment your Cloud platform so that a compromise in one area is contained.</li>
<li>Deploy technology using patterns, and maintain your configurations through constant checking and automation.</li>
<li>Detect unusual events quickly and provide actionable information to critical stakeholders promptly.</li>
<li>Automate heavily.</li>
</ul>



<p>These are simple concepts, even for non-technologists.</p>



<p>However, the execution of Cloud Security at scale is anything but easy.  Let&#8217;s take the area of entitlements, for example.  Maintaining a consistent view of all of the entitlements held by every human and machine identity at scale is incredibly challenging.  While the emerging product field of Cloud Infrastructure Entitlement Management (CIEM) intends to tackle this challenge, the solutions and market are immature.  Microsoft&#8217;s recent acquisition of CloudKnox, now rebranded as part of the Entra product family, is a case in point.  Entra is an interesting product providing information on Role-Based Access Control (RBAC) entitlements for Azure and other Cloud environments.  Still, it does not yet give a view of Azure Active Directory entitlements.  The combination of roles and entitlements across Azure AD and Azure RBAC is a critical view for identifying potentially undesirable (toxic) combinations.</p>



<p>Without appropriately mature tooling, it is practically impossible for any Cloud Operations or Cloud Security Operations team to understand all entitlements held by any single identity or security principal.  Given the number of breaches caused or facilitated by overprivileged credentials, this area desperately needs improved capability.</p>
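<p>To make the entitlement-aggregation problem concrete, here is a minimal sketch in Python.  The export shapes, role names and &#8220;toxic&#8221; pair below are illustrative assumptions, not any vendor&#8217;s real API; the point is simply that correlating directory roles with platform RBAC assignments per principal is a join that today&#8217;s tooling rarely does for you.</p>

```python
from collections import defaultdict

# Hypothetical entitlement exports. The shapes and role names are
# illustrative assumptions, not a real vendor API.
directory_roles = [
    {"principal": "alice@example.com", "role": "Global Administrator"},
    {"principal": "bob@example.com", "role": "Reader"},
]

rbac_assignments = [
    {"principal": "alice@example.com", "role": "Owner", "scope": "/subscriptions/1234"},
    {"principal": "bob@example.com", "role": "Contributor", "scope": "/subscriptions/1234"},
]

# Directory-role / RBAC-role pairs we consider undesirable in one identity
TOXIC_PAIRS = {("Global Administrator", "Owner")}

def toxic_combinations(directory_roles, rbac_assignments):
    """Return, per principal, the flagged (directory role, RBAC role, scope) triples."""
    held = defaultdict(set)
    for entry in directory_roles:
        held[entry["principal"]].add(entry["role"])
    flagged = {}
    for entry in rbac_assignments:
        for dir_role in held[entry["principal"]]:
            if (dir_role, entry["role"]) in TOXIC_PAIRS:
                flagged.setdefault(entry["principal"], []).append(
                    (dir_role, entry["role"], entry["scope"])
                )
    return flagged
```

<p>Even this toy version hints at the scale problem: a real environment has thousands of principals, dozens of role sources, and constantly changing assignments.</p>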



<p>So Cloud Security is not easy, even if it is conceptually simple.  An analogy struck me relating to DNA.  The four bases that form DNA are relatively simple components. However, combined in an incredibly variable manner, they can create hugely complex organisms, ranging from a blue whale to a human to a fruit fly.  Similarly, the variability of the underlying services in a Cloud environment and their combinations make securing Cloud solutions at scale incredibly challenging.  Simple components build towards extremely complex &#8220;organic&#8221; ecosystems.  As the line between IaaS and PaaS solutions becomes ever more blurred, the combinations increase in variability and complexity.</p>



<p>In a DNA-driven world, how the bases combine is governed by straightforward principles—Adenine pairs with Thymine, and Cytosine pairs with Guanine.   During DNA replication, enzymes check to ensure that the correct bases have been added to the chain.  If there are errors, they are removed at the source before the DNA is &#8220;written&#8221;.</p>



<p>In Cloud Security, we can keep our organisations focussed on the simple principles that will help us manage complexity at scale.  From a practical perspective, we can ensure that we build environments using version-controlled Infrastructure as Code (IaC). We can wrap IaC templates with Policy as Code pre-deployment checks.  We can validate from a post-deployment perspective that what we intended to build is actually running, using posture management and workload protection tools.  And we can continue to educate our broader organisations that what appears simple is not easy.  The lure of the Cloud is powerful, and its concepts are simple. However, the reality of how to get there safely is highly complex and requires the appropriate preparation, training and tooling to avoid disaster.</p>
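<p>As a sketch of what a Policy as Code pre-deployment check can look like, the short Python below evaluates a parsed template against two rules.  The resource shape and property names are invented for illustration; real implementations would run a policy engine such as Open Policy Agent against Terraform plan output or ARM templates.</p>

```python
def check_template(resources):
    """Evaluate parsed IaC resources against two illustrative policy rules."""
    violations = []
    for res in resources:
        if res.get("type") == "storage_account":
            # Rule 1: no publicly accessible storage
            if res.get("public_access", False):
                violations.append((res["name"], "public access enabled"))
            # Rule 2: encryption at rest must stay on
            if not res.get("encryption_at_rest", True):
                violations.append((res["name"], "encryption at rest disabled"))
    return violations

# A toy "template": a list of parsed resources
template = [
    {"type": "storage_account", "name": "logs", "public_access": True},
    {"type": "storage_account", "name": "data", "public_access": False},
]
```

<p>Run in a pipeline, a non-empty violations list fails the build, which is how errors get removed at the source before anything is &#8220;written&#8221; to the environment.</p>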
]]></content:encoded>
					
					<wfw:commentRss>https://www.amusingmulcahy.com/cloud-security-is-simple/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Pattern matching and the lost evolutionary high ground</title>
		<link>https://www.amusingmulcahy.com/pattern-matching-and-the-lost-evolutionary-high-ground/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=pattern-matching-and-the-lost-evolutionary-high-ground</link>
					<comments>https://www.amusingmulcahy.com/pattern-matching-and-the-lost-evolutionary-high-ground/#respond</comments>
		
		<dc:creator><![CDATA[Ger]]></dc:creator>
		<pubDate>Wed, 01 May 2019 18:47:26 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://www.amusingmulcahy.com/?p=391</guid>

					<description><![CDATA[The human brain loves patterns. We identify patterns everywhere, even where they don’t exist. In the past, this has been of evolutionary benefit. Being able to recognise poisonous tree frogs or tigers or gaps in a forest floor, or tribe members who look like us have all been helpful to our survival. The brain is … <a href="https://www.amusingmulcahy.com/pattern-matching-and-the-lost-evolutionary-high-ground/" class="more-link">Continue reading<span class="screen-reader-text"> "Pattern matching and the lost evolutionary high ground"</span></a>]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-image"><figure class="alignleft is-resized"><img decoding="async" src="https://www.amusingmulcahy.com/wp-content/uploads/2019/05/patterns.png" alt="" class="wp-image-423" width="238" height="207"/></figure></div>



<p>The human brain loves patterns.  We identify patterns everywhere, even where they don&#8217;t exist.  In the past, this has been of evolutionary benefit. Being able to recognise poisonous tree frogs, tigers, gaps in a forest floor, or tribe members who look like us has been helpful to our survival.  The brain is an expensive engine to run, and any heuristic that helps us be more efficient saves us energy. </p>


<p><span id="more-391"></span><!--more--></p>


<p>Until quite recently, we thought we were the rulers of the world of broad pattern-matching.  Sure, animals could recognise patterns for the same evolutionary reasons we could, but from the point of view of image-based pattern matching (as in reCAPTCHAs, where one has to recognise all of the images with traffic lights, for example), or being able to create relationships between language, images and sounds (the word &#8220;cat&#8221;, a picture of a cat and the sound of a cat), we pretty much had the mountain peaks to ourselves. </p>



<p>We learn about patterns by experience.  Ray Dalio writes in Principles about the benefit of being able to identify &#8220;another one of those&#8221; based on the study of past events or past experiences.   This recognition of patterns aids in decision-making and provides comfort that we&#8217;ve seen this kind of problem before.</p>



<p>We sometimes misidentify patterns.  Humans mistake correlation for causation so frequently that it&#8217;s truly not funny (although the site <a aria-label="Spurious Correlations  (opens in a new tab)" rel="noreferrer noopener" href="http://www.tylervigen.com/spurious-correlations" target="_blank">Spurious Correlations </a>has some really great examples on it, such as the relationship between the divorce rate in Maine and sales of margarine). </p>



<p>Visual patterns can be tremendously appealing.  The arrangement of a honeycomb, the fronds of a fern, or the shapes of snowflakes can all be delightful to us. The rule of thirds is a pattern in photographic composition that is almost universally accepted. Similarly, patterns in mathematics or music can be hugely satisfying to our brains.</p>



<p>We&#8217;ve communicated these patterns and their ramifications through language, art, science, music, and culture throughout the generations.  Being able to recognise patterns is of importance in all sorts of fields.  Biologists can use pattern-based knowledge to identify pests.  Medical experts can use them to identify human diseases through visual identification or the description of symptoms and their spread.  Technologists can use patterns for short-cuts in code or for speedier troubleshooting.  This use of patterns has been a huge advantage to us &#8211; and we are very good at it.</p>



<p>Max Tegmark, in Life 3.0, argues that we may be losing that evolutionary high ground.  Computers are now faster and more effective than humans in identifying patterns in key areas.  Self-driving cars have comprehensively demonstrated the capability of AI-driven visual pattern matching (along with many other complex decision-making algorithms).&nbsp; In addition, despite some highly publicised incidents (and <a aria-label="one in which Tesla's Autopilot feature failed to differentiate (opens in a new tab)" rel="noreferrer noopener" href="https://www.nytimes.com/2017/01/19/business/tesla-model-s-autopilot-fatal-crash.html" target="_blank">one in which Tesla&#8217;s Autopilot feature failed to differentiate</a> between the side of a trailer and the brightly-lit sky), self-driving cars are arguably safer than human drivers.</p>



<p>In the medical arena, AI is proving to be as good as, if not better than, trained radiologists and other specialists at <a aria-label="identifying injuries and disease (opens in a new tab)" rel="noreferrer noopener" href="https://news.stanford.edu/2017/11/15/algorithm-outperforms-radiologists-diagnosing-pneumonia/" target="_blank">identifying injuries and disease</a>.&nbsp;&nbsp; Articles on selective inattention <a aria-label="such as this one  (opens in a new tab)" rel="noreferrer noopener" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3964612/" target="_blank">such as this one </a>describe how trained experts missed the image of a gorilla inserted into medical images, <strong>despite looking right at the gorilla. </strong> This paper was based on Daniel Simons&#8217; Monkey Business Illusion, which I was gobsmacked by when I saw it first.</p>



<p>The ability of AI to identify patterns in large amounts of information is considerably better than ours. We don&#8217;t have the processing capacity or &#8220;attentional&#8221; capability to review very large volumes of data.&nbsp; We become tired, we lose focus, we become distracted.  Computers can run 24 x 7, processing trillions of bits of information without fatigue or error.</p>



<p>So does this mean we&#8217;re irrelevant, and doomed to be jobless?</p>



<p>I would argue strongly that the answer is a resounding no.  There are still areas where human expertise has not been outstripped by the use of computers.  We are able to interpret patterns of human behaviour and emotional expression, and to develop appropriate responses.  Human-human interaction is still a huge part of how the world operates (without it we would clearly not exist).  Our ability to excel in the workplace and elsewhere should not be threatened by the addition of AI &#8211; it should be enhanced by it.</p>



<p>In my opinion, the most interesting writing about AI at the moment is not focussed on how computers will displace us, such as the many sensationalist &#8220;AI is coming for your job!&#8221; headlines.  It is about how we can leverage AI to improve our capabilities in partnership with technology. </p>



<p>We&#8217;ve lost the pattern-matching crown (in some areas many years ago), just as we lost the automotive construction battle, and the chess-playing trophy, and as we will cede many other areas to technology in the future.  The great thing about humans is we always find new problems to solve (or failing that, create them for ourselves). </p>



]]></content:encoded>
					
					<wfw:commentRss>https://www.amusingmulcahy.com/pattern-matching-and-the-lost-evolutionary-high-ground/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Should we be afraid of AI?</title>
		<link>https://www.amusingmulcahy.com/should-we-be-afraid-of-ai/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=should-we-be-afraid-of-ai</link>
					<comments>https://www.amusingmulcahy.com/should-we-be-afraid-of-ai/#respond</comments>
		
		<dc:creator><![CDATA[Ger]]></dc:creator>
		<pubDate>Sun, 27 Jan 2019 13:46:25 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://www.amusingmulcahy.com/?p=335</guid>

					<description><![CDATA[There have been warnings for many years about the potential for disaster with Artificial Intelligence implementations.  Many luminaries, from Elon Musk to Stephen Hawking, have warned about the implications if we unwittingly create a robotic overlord who deems that we are irrelevant at best and destructive at worst, and decides the world will be better … <a href="https://www.amusingmulcahy.com/should-we-be-afraid-of-ai/" class="more-link">Continue reading<span class="screen-reader-text"> "Should we be afraid of AI?"</span></a>]]></description>
										<content:encoded><![CDATA[<p><img decoding="async" class="alignleft  wp-image-336" src="https://www.amusingmulcahy.com/wp-content/uploads/2019/01/brain_AI.png" alt="" width="162" height="137" />There have been warnings for many years about the potential for disaster with Artificial Intelligence implementations.  Many luminaries, from Elon Musk to Stephen Hawking, have warned about the implications if we unwittingly create a robotic overlord who deems that we are irrelevant at best and destructive at worst, and decides the world will be better off without us.</p>
<p>So why am I concerned now, and should you be?</p>
<p><span id="more-335"></span></p>
<p>The trigger for me writing this piece may seem strange.  <a href="https://deepmind.com/">DeepMind</a> AI agents <a href="https://www.extremetech.com/gaming/284441-deepmind-ai-challenges-pro-starcraft-ii-players-wins-almost-every-match" target="_blank" rel="noopener">recently won</a> a series of StarCraft II Real Time Strategy (RTS) games against top professional players, beating the humans in 10 of 11 games.  RTS games can be pretty complex, and while we&#8217;ve seen human champions (for example Kasparov and Ke Jie for chess and Go respectively) fall before, this is the first time that the complexities of StarCraft II have been mastered by an AI agent. Why is this significant?</p>
<p>Unlike in a game like chess, the playing area in StarCraft II is large, and due to &#8220;fog of war&#8221; (a veil over unexplored areas), at the start of the game the contents of the map are invisible to the player. In other words, you can&#8217;t see what your opponent is doing until you&#8217;ve explored the map or their units show up in your part of the map.  There are a huge number of variables in choosing which strategy to adopt &#8211; there is no single &#8220;best strategy&#8221; for victory.  Early rushes, where a low-tech, high-volume army charges and wipes out the opponent&#8217;s base, can be successful, but similarly, long, drawn-out matches are common.</p>
<p>Poor resource management decisions early in the game (e.g. planning to invest in one technology and neglecting others) can have long-term implications.  And the variability in capability and unit strengths for each of the three races also impacts how a strategy is built.</p>
<p>In addition, as the name of the game category suggests, all action takes place in real time.  Constantly changing variables mean that there are multiple decisions that need to be made at any point in time, and there is a requirement to constantly adjust based on new information.</p>
<p>Again, &#8220;So What?&#8221;, I hear the non-gamers out there ask.  Why should you care?  Tim Urban, on his hugely interesting site &#8220;<a href="https://waitbutwhy.com/">Wait but Why</a>&#8221; in a two-part post on AI, positions the situation like this &#8211; AI will either mean our eventual ascendancy to immortality (something I think is pretty horrifying) or our falling off the evolutionary balance beam into extinction.</p>
<p>The key point in his argument for me is that should AI ever reach the level of AGI (Artificial General Intelligence) it will so rapidly outstrip us and achieve ASI (Artificial Super Intelligence) that we won&#8217;t even  be able to comprehend how vastly superior the new intellect (and our new overlord) is.  For a visual representation of what that might look like, take a look at his <a href="https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html" target="_blank" rel="noopener">Intelligence Staircase</a>.</p>
<p>In the context of StarCraft II, the ability of the AI to conduct 200 years of training in a single week (at current silicon speeds) shows the evolutionary advantage a computer-based intelligence has over a human.  To do the same level of training would take a human&#8230; 200 years 🙂  We can&#8217;t operate faster than our biology allows.  We&#8217;re not able to test, analyse and adjust strategies as quickly or accurately as an AI agent would.  We&#8217;re also hampered by cognitive biases, a lack of perfect recall and an inability to execute more than a single task at a time.</p>
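<p>A quick back-of-the-envelope on that claim (my arithmetic, not DeepMind&#8217;s published figure): compressing 200 years of practice into one week is a speed-up of roughly ten thousand times.</p>

```python
# Rough speed-up implied by "200 years of training in a single week"
human_weeks = 200 * 52   # ~200 years of practice, expressed in weeks
agent_weeks = 1          # wall-clock time the agent actually needed
speedup = human_weeks // agent_weeks
```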
<p>We&#8217;re seeing real progress in the implementation of autonomous vehicles.  AI-driven assistants are becoming more and more part of our lives.  Deep-learning algorithms are mining the world&#8217;s data, driving everything from investment decisions to medical diagnoses. The technologies to develop, for example, lethal autonomous weapons (or killer robots, for our sci-fi aficionados) are all available today.  We have a huge proliferation of ANIs (Artificial Narrow Intelligences) which are good at one specific task or set of tasks.</p>
<p>If we are not intentional about the path we&#8217;re following, it is not a huge leap to posit a situation where we inadvertently create an overarching AGI, then ASI, which is neither malevolent nor benevolent.  It will just be Other, and, as with the AI that beat the StarCraft II champions, its decision-making will be largely incomprehensible to human observers.   If you really want to be worried about this, Nick Bostrom&#8217;s book, Superintelligence, paints a bleak picture indeed.</p>
<p>While there are clearly many advantages to leveraging AI, we need to be much more aware of the implications.  And while the recent developments may seem trivial, they are one more step along the path to humanity no longer being the &#8220;smartest&#8221; entity on the planet, with all that may entail.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.amusingmulcahy.com/should-we-be-afraid-of-ai/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A May-December Romance &#8211; Public Cloud Providers &#038; Large Enterprises</title>
		<link>https://www.amusingmulcahy.com/a-may-december-romance-public-cloud-providers-large-enterprises/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=a-may-december-romance-public-cloud-providers-large-enterprises</link>
					<comments>https://www.amusingmulcahy.com/a-may-december-romance-public-cloud-providers-large-enterprises/#respond</comments>
		
		<dc:creator><![CDATA[Ger]]></dc:creator>
		<pubDate>Thu, 04 Oct 2018 11:05:57 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://www.amusingmulcahy.com/?p=201</guid>

					<description><![CDATA[It’s to be expected, really.  You want to go out clubbing, and the object of your affections, who is significantly older than you, wants to get an early night because they have a parent-teacher meeting first thing in the morning. Clearly, in this scenario, retail Public Cloud Providers (PCPs) are the younger member of the … <a href="https://www.amusingmulcahy.com/a-may-december-romance-public-cloud-providers-large-enterprises/" class="more-link">Continue reading<span class="screen-reader-text"> "A May-December Romance – Public Cloud Providers & Large Enterprises"</span></a>]]></description>
										<content:encoded><![CDATA[<p><img decoding="async" class="alignleft size-full wp-image-217" src="https://www.amusingmulcahy.com/wp-content/uploads/2018/10/heart_cloud.png" alt="" width="200" height="139" />It&#8217;s to be expected, really.  You want to go out clubbing, and the object of your affections, who is significantly older than you, wants to get an early night because they have a parent-teacher meeting first thing in the morning.</p>
<p>Clearly, in this scenario, retail Public Cloud Providers (PCPs) are the younger member of the relationship &#8211; looking to move fast and break things, as it were.  Large, regulated Enterprises are the older partner, looking to put their feet up at the end of a complicated and trying day.  They can&#8217;t move as fast as the PCPs, because they have accumulated responsibilities in the form of regulatory and board oversight, and have less agility in their old bones than the more nimble PCPs.<span id="more-201"></span></p>
<p>When PCPs are dating/courting startups, or Small-Medium Enterprises (SMEs) in less regulated spaces, the pace of adoption and engagement in the relationship is often faster and less complicated.  This is because the younger organisations in the partnership may have similar interests, with fewer individual areas of specific concern.   The equation is one of best price and warranty for the service provided, and it often ends there.</p>
<p>Newer organisations tend to be more cloud-ready as well &#8211; they&#8217;ve grown up with modern application methods and technology and are carrying less technical debt.  Many large enterprises have applications that pre-date the foundation of today&#8217;s PCPs &#8211; and the management of these legacy applications can be critical to the day-to-day functioning of the enterprise.</p>
<p>Larger Enterprises also have a significant overhead in the form of regulations that they are accountable to meet, security and compliance controls to adhere to and boards that may be highly risk averse.</p>
<p>Does this &#8220;age-gap&#8221; mean that the relationship is doomed?</p>
<p>Not necessarily; it just means that the younger partner needs to be more aware of the specific needs of their older paramour.   Some PCPs are clearly very aware of this.  For example, Microsoft Azure is focussing heavily on providing hybrid cloud services, including Active Directory integration and <a href="https://azure.microsoft.com/en-us/overview/azure-stack/">Azure Stack</a> for on-premises use, so that the boundary between the enterprise and the PCP is less of a barrier to adoption.</p>
<p>In their turn, the elder partner needs to be aware that contracts and master services agreements may need to be reassessed while still maintaining the appropriate risk management posture.  Workload selection also has to be carefully managed &#8211; the &#8220;wrong&#8221; workloads migrating to a cloud environment will clearly result in unhappy outcomes.</p>
<p>What else can PCPs do to help?  They can simplify the transition to a cloud environment by making compliance resources readily available, as AWS did recently with their <a href="https://aws.amazon.com/compliance/">Cloud Compliance Center</a>.</p>
<p>They can simplify and make more transparent their usage and pricing structures.  For the past few years, articles have been published (e.g. <a href="https://www.technative.io/why-businesses-are-exiting-the-public-cloud/">here</a>, <a href="https://www.sdxcentral.com/articles/contributed/public-cloud-fatigue-why-more-organizations-are-rethinking-their-cloud-strategies/2017/10/">here</a> and <a href="https://www.forbes.com/sites/netapp/2016/03/16/will-companies-born-in-the-cloud-become-trapped-there/#4f222b3e4a5f">here</a>) about businesses pulling workloads back from public cloud environments, in part because of &#8220;sticker shock&#8221;.   (There are other reasons, including availability issues that have been identified as driving the pullback).</p>
<p>In addition, PCPs can put together chains of services that legacy application managers can consume more readily.   For example, AWS Lightsail, Fargate, Beanstalk and Migration Services are a step in the right direction.  These still don&#8217;t remove the overwhelming variety and complexity of sub-services that PCPs offer, but compared to e.g. Google Cloud Platform, they provide a friendlier face to newcomers to public cloud environments.</p>
<p>Is this May-December relationship still worth pursuing?  Absolutely &#8211; because there can be significant value for both parties if approached correctly.  Large Enterprises can use public cloud environments as a catalyst for application modernisation, risk reduction and capital budget weight-loss (although OpEx clearly needs to be very carefully managed).  PCPs can benefit from the steady, predictable income that a well-funded, firmly established partner can provide (and can learn how to further develop offerings that are suitable to these organisations, growing the business).  As with any relationship, the key is to ensure that expectations are set appropriately at the outset, and then met or exceeded.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.amusingmulcahy.com/a-may-december-romance-public-cloud-providers-large-enterprises/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A tale of two (fitness) trackers</title>
		<link>https://www.amusingmulcahy.com/a-tale-of-two-fitness-trackers/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=a-tale-of-two-fitness-trackers</link>
					<comments>https://www.amusingmulcahy.com/a-tale-of-two-fitness-trackers/#respond</comments>
		
		<dc:creator><![CDATA[Ger]]></dc:creator>
		<pubDate>Sun, 09 Sep 2018 08:46:12 +0000</pubDate>
				<category><![CDATA[Consumer tech]]></category>
		<category><![CDATA[Technology]]></category>
		<guid isPermaLink="false">https://www.amusingmulcahy.com/?p=147</guid>

					<description><![CDATA[I was trying to think what the collective term for fitness watches and trackers might be (akin to a murder of crows, or a sloth of bears) and what came to mind was “a frustration”. Wearable technology is increasingly becoming part of the world we live in.  From ingestible medical devices to AR headsets, from … <a href="https://www.amusingmulcahy.com/a-tale-of-two-fitness-trackers/" class="more-link">Continue reading<span class="screen-reader-text"> "A tale of two (fitness) trackers"</span></a>]]></description>
										<content:encoded><![CDATA[<p>I was trying to think what the collective term for fitness watches and trackers might be (akin to a murder of crows, or a sloth of bears) and what came to mind was &#8220;a frustration&#8221;.</p>
<p>Wearable technology is increasingly becoming part of the world we live in.  From ingestible medical devices to AR headsets, from clothing with connectivity to <a href="https://www.wareable.com/running/shft-iq-wearable-running-coach-7111" target="_blank" rel="noopener">gadgets to make your running smarter</a>, we are buying more and more tech to generate more and more data about our lives.  Allegedly at least, making us more productive, fitter and healthier.<span id="more-147"></span></p>
<p>These devices promise so much, and sometimes deliver nothing but broken promises and disappointments.  Even the veterans of the bunch, the fitness trackers, are inconsistent, sometimes temperamental, and occasionally as forgetful as a regular person (&#8220;what device was I supposed to be connected to?&#8221;).</p>
<p>I&#8217;ve had two main fitness trackers/&#8220;smart&#8221; watches over the past four years, and the experience with both had some of the above shortcomings, but I ended up really loving one of the trackers.</p>
<p>My current tracker is a Fitbit Blaze HR, and, well, it&#8217;s ok.  I specifically bought the HR model because, you guessed it, I wanted to track my heart rate while exercising.  Turns out there&#8217;s a catch with that.  There are a good number of posts in the Fitbit forums on how you need to move the strap several finger widths up your wrist and tighten the strap when exercising in order for the Blaze to get a good heart rate read.  In practical terms it&#8217;s fine for resting heart rate, but not so good for exercise, particularly anything like HIIT or hypertrophy lifts that cause your pulse to spike significantly.</p>
<p>I&#8217;ve gone through the stock silicone band since I first started using it just over a year ago &#8211; I&#8217;ve since replaced it with a metal one, which seems likely to last longer (and gives the watch a smarter look, in my opinion).</p>
<p>Prior to the Blaze, I had a Microsoft Band 2 for two years, and despite its idiosyncrasies it did everything I wanted it to.  It, too, struggled with heart rate accuracy. (Most LED-based trackers won&#8217;t hold a candle to a dedicated Polar-style ECG band around the chest, but I don&#8217;t like the faff associated with those.  Given how frequently I forget my water bottle, it&#8217;s also just one more thing to add to the list of gear needed to have a workout).</p>
<p>The critical difference for me between the two trackers was that the MS Band 2 very rarely lost sync with my phone, controlled music properly (which in the gym is a lot handier than taking your phone out and scrolling through playlists), and displayed just enough notifications to be useful.  The Blaze doesn&#8217;t do these things well.  It often loses sync, sometimes to the point where I have to reboot both phone and watch to get sync re-established, and then it only works after several tries. Its battery life is relatively short, and very unpredictable.  And for some reason it just doesn&#8217;t make me like it the way the MS Band 2 did.</p>
<p>So why am I using a fitness tracker I don&#8217;t like as much as the previous one?  I went through four Band 2 replacements under warranty in two years.  Three of the four failed because the wrist band, which was integrated into the face, split during use in the gym.  With some key electrical elements embedded in the strap, that was a fatal flaw.  Microsoft displayed the best customer support I&#8217;ve seen in some time &#8211; they replaced the failed trackers under warranty with no fuss (they clearly knew they had a problem), and then gave me a full refund after two years&#8217; use when they acknowledged that they would no longer be able to provide replacements.  Given the responses we&#8217;re accustomed to getting as customers in a disposable age, I was really pleased and impressed by this level of service.</p>
<p>As a result, despite my frustrating experiences with fitness trackers to date (and talking to friends and trainers in the gym I know I&#8217;m not alone), I will be an eager buyer if Microsoft ever decided to release a Band 3.  Not because I expect the experience to be flawless, but because they have shown that they can provide a great customer experience for the product, and because to me the Band 2 had enough redeeming features to overcome its flaws.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.amusingmulcahy.com/a-tale-of-two-fitness-trackers/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
