We’ve reached an inflection point with AI. It’s in the news we consume and the general cultural conversation, but there are also more tangible markers we can point to.
In just two years, Amazon’s Echo – and, as a result, Alexa – has made its way into more than six per cent of American homes. By 2020, people are projected to be making 200 billion voice searches every month. Or, if you prefer the market as a bellwether, $5.4 billion (£4.2bn) was invested in AI startups in 2016 – a sum that doesn’t include what internal R&D groups poured into AI over the same period, which would likely dwarf that figure.
While the fact that people, in general, no longer associate AI with Skynet bodes well for the technology’s role in our day-to-day lives, that fear of AI insurrection has been replaced with new worries. A 2016 Cornell study found that while conversations about AI have been “consistently optimistic” since 2009, worries over loss of control, ethics and automation have grown over the same period.
What’s more, there are negative AI conversations happening that most people don’t associate with AI. Post-election, conversations about filter bubbles have become pervasive – the filter, in this case, being an algorithm learning and evolving based on advances in deep learning. Any conversation we have about big data is quietly a conversation about AI. A self-driving car is powered by AI. Google has recently applied deep learning to Google Translate in search of “one shot” translations – essentially making translations between language pairs it has never explicitly been trained on. Netflix’s algorithm tries to give you what you want, and Spotify uses AI to create your Discover Weekly playlist. It’s even part of the way we have sex – dating apps such as Tinder have algorithms that learn what users want and change the way they present your profile. There is no escaping it.
So as a human, a person caught in this paradigm shift, how are you supposed to navigate this system? How do you ensure that AI is working for you, and not the other way around?
Track the AI already in your life
Take a look under your own digital hood. What companies are you giving data to, and for what reasons? Ultimately, most companies are using AI to improve the product or service they’re delivering you. However, if your fear is AI taking over without you realising it, you need to start paying attention now. We all know we’re giving Google and Facebook our data – those are the obvious cases – but (almost) every digital service is collecting usage data. If “data is the new oil”, you need to be aware of all the places your crude is being shipped. Conduct an audit of the digital services you’re using on a daily basis. If you don’t want to share with them, simple steps such as clearing your browsing history and internet cache can help a little. But because of terms of service, at this point, usage is tacit acceptance.
Break it!
When you know that an AI is being used, you can break it. This is the big solve for our fears about filter bubbles. Filter bubbles are created when algorithms think they know us. The trouble is that they’re self-perpetuating – an AI feeds you what it thinks you want, you like it, and it homes in closer and closer on a single thing, whether that’s music, articles or anything else.
If you don’t want to be in a bubble, you can pop it by feeding the AI information that doesn’t reflect your current habits. If you want to learn more about trap music, play Gucci Mane albums so your Discover Weekly reflects this. If you want Tinder to stop feeding you clones, start swiping left on the feature all your dates seem to have. If you want Facebook to serve you alternative viewpoints, find a group of respectable people on the other side and like their pages.
You have more control than you might think, if you’re willing to break things.
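The feedback loop described above can be reduced to a toy simulation. This is a minimal sketch with a made-up scoring model – a weight per genre, reinforced on each like – not any real platform’s algorithm; the genre names and the `0.3` learning rate are illustrative assumptions:

```python
import random

# Toy recommender: each "genre" gets a weight; liking an item reinforces it.
GENRES = ["pop", "trap", "jazz", "news-left", "news-right"]

def recommend(weights):
    """Serve whichever genre the model currently scores highest."""
    return max(GENRES, key=lambda g: weights[g])

def update(weights, genre, liked, rate=0.3):
    """Reinforce (or dampen) a genre after user feedback."""
    weights[genre] += rate if liked else -rate

# A passive user who likes whatever they're shown: the loop self-perpetuates.
weights = {g: random.random() * 0.1 for g in GENRES}
for _ in range(20):
    shown = recommend(weights)
    update(weights, shown, liked=True)  # same genre wins every round
bubble = recommend(weights)             # one genre now dominates

# "Popping" the bubble: deliberately feed the model a different signal
# until it outweighs twenty rounds of passive reinforcement.
for _ in range(25):
    update(weights, "jazz", liked=True)
```

After the passive loop, the first genre the model happened to favour has been reinforced twenty times over; only a deliberate, sustained counter-signal shifts the recommendation.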
Vote with your money
It goes without saying that every pound you spend is a vote for the perpetuation of what you spend it on, and this applies across all AI arenas. Almost every category where AI is having an impact has a competitor, meaning you still have a choice: pick the platform with the best policy.
This is why Lyft suddenly has a shot at stealing Uber’s lunch; markets correct bad behaviour when you apply ethics to your consumption. Right now everything in AI is being driven by market forces, so conscious capitalism will play a role in what our future AI overlords ultimately look like. Just kidding. They’ll look like lines of code.
The great part about inflection points is that things are still being figured out; the future isn’t written. If you’re looking for a moment of optimism, a good source is in the idea of “centaurs” – human-AI hybrids first introduced in Freestyle Chess. These hybrids beat all AI-only and human-only teams by leveraging the best that both humans and AI have to offer.
Additionally, some researchers have speculated that our brains may work like quantum computers, which would make us valuable partners to an AI, at least for the time being… +1 for intuition! Elon Musk’s most recent talking point, the idea of a neural lace – a concept deeply embedded in science fiction such as Iain M. Banks’s Culture series – is essentially about making us more valuable centaurs.
Finding ways to work with AI is the only way we’ll be able to prevent it from taking over, and if we want to do that, we need to choose the way it enters our lives. Think of AI as the child of humanity: if it’s misbehaving, there’s no one to blame but ourselves.
[Source: Wired]