Tag Archives: Artificial intelligence

What Makes AI Dangerous? The State, by Per Bylund

As with most new technologies that have both benefits and dangers, the dangers are most pronounced when the technology gets into the hands of governments. From Per Bylund at mises.org:

So I watched “Do you trust this computer?”, a film that “explores the promises and perils” of artificial intelligence. While it notes both the good and the bad, it has an obvious focus on how AI might bring about “the end of the world as we know it” (TEOTWAWKI). That is, if it is left unregulated.

It’s strange, however, that the examples of TEOTWAWKI AI were “autonomous weapons” and “fake news,” the latter because of how it can provide a path for a minority-supported dictator to “take over.” While I understand (and fear) both, the examples have one thing in common – but it is not AI.

That one thing is the State. Only States’ militaries and groups looking to take over a State have any interest in “killer robots.” They’re also developed by/for those groups. The fake news and “undue influence” issue is likewise about power over the State. Neither weapons nor fake news requires AI. Yet, in some strange twist, the filmmakers make it an AI problem. Worse: they end the film indicating that the main problem is that AI is “unregulated.”

But this is completely illogical: how can the State be both the problem’s common denominator *and* its solution?

Instead, we’re led to believe that it is problematic that Google tracks our web searches and Facebook knows our friends and beliefs (“because autonomous weapons”?). While I agree that it is ugly, neither company is making a claim over life and death. In fact, they operate under the harshest regulation there is: the market. They are making investments to make money, and money can only be made in one of two ways: through offering something that people want and are willing to pay for (Oppenheimer’s “economic” means), or through simply taking it from people against their will (“political” means). Companies operate according to the former, which means they are subject to the mercy of consumers. The State operates according to the latter.

 

To continue reading: What Makes AI Dangerous? The State


Wells Fargo’s Artificial Intelligence Defies Analysts, Slaps “Sell” on Google and Facebook, by Wolf Richter

SLL’s bet is on artificial, rather than human, intelligence. From Wolf Richter at wolfstreet.com:

Google, which makes almost all of its money on ads and internet user data, is undertaking herculean efforts to get a grip on artificial intelligence (AI). It’s trying to develop software that allows machines to think and learn like humans. It’s spending enormous resources on it. This includes the $525 million acquisition in 2014 of DeepMind, which is said to have lost an additional $162 million in 2016. Google is trying to load smartphones with AI and come up with AI smart speakers and other gadgets, and ultimately AI systems that control self-driving cars.

Facebook, which also makes most of its money on ads and user data, is on a similar trajectory, but spreading in other directions, including a “creepy” run-in with two of its bots that were supposed to negotiate with each other but ended up drifting away from human language and inventing their own language that humans couldn’t understand.

And here comes an AI bot developed by stock analysts at Wells Fargo Securities. The human analysts have an “outperform” rating on Google’s parent Alphabet and on Facebook. They worked with a data scientist at Amazon’s Alexa project to create the AI bot. And after six months of work, the AI bot was allowed to do its job. According to their note to clients on Friday, reported by Bloomberg, the AI bot promptly slapped a “sell” rating on Google and Facebook.

Human analysts on Wall Street are famous for their incessantly optimistic ratings and outlooks. They generally only put a “sell” on a stock after it has already plunged. They’re part of Wall Street’s human hype machine. Their job is to help inflate stock prices and make CEOs feel good so that they will do business with the analysts’ firms and send fees their way. But Wells Fargo’s AI bot hasn’t gotten the memo.

Last month, a group led by Ken Sena, head of Global Internet Analyst at Wells Fargo Securities, introduced this “artificially intelligent equity research analyst,” or AIERA. Its “primary purpose is to track stocks and formulate a daily, weekly, and overall view on whether the stocks tracked will go up or down,” Sena said at the time.

So “she” did Big Data analysis of Alphabet, Facebook, and some other stocks, and after seeing what’s there, averted her eyes in disgust and slapped a “sell” recommendation on both stocks and a “hold” recommendation on 11 other cherished stocks.

To continue reading: Wells Fargo’s Artificial Intelligence Defies Analysts, Slaps “Sell” on Google and Facebook

How “Nothing to Hide” Leads to “Nowhere to Hide” – Why Privacy Matters in an Age of Tech Totalitarianism, from the Daily Bell

Your life may be one of pristine purity and goodness, but even if you think you have nothing to hide, you may still not want to give up your privacy. From the Daily Bell at dailybell.com:

Editor’s note: The following comes from a longtime journalist who specializes in writing for major media outlets and private companies about robots, Big Data, and Artificial Intelligence (AI).

Would you allow a government official into your bedroom on your honeymoon? Or let your mother-in-law hear and record every conversation that takes place in your home or car – especially disagreements with your husband or wife? Would you let a stranger sit in on your children’s playdates so that he could better understand how to entice them with candy or a doll?

Guess what? If you bring your phone with you everywhere, or engage with a whole-house robo helper such as Alexa or Echo or Siri or Google, you’re opening up every aspect of your life to government officials, snooping (possibly criminal) hackers, and advertisers targeting you, your spouse and your children.

The following is not a screed against technology. But it is a plea to consider what we’re giving up when we hand over privacy, wholesale, to people whom we can neither see nor hear… people whose motives we cannot fathom.

The widened lanes of communication and the conveniences that smartphones, wireless communities, Big Data, and Artificial Intelligence (AI) have fostered are indeed helpful to some extent. They allow, for example, for remote working, which lets people spend more time with their families and less time commuting. In areas such as the energy business, the field of predictive analytics, born of Big Data and the Industrial Internet, helps mitigate the danger of sending humans to oil rigs at sea.

And on a personal level, of course, the conveniences are innumerable: Grandparents living far away can “see” their grandchildren more often than they could in years past, thanks to technology such as FaceTime and Skype.

To continue reading: How “Nothing to Hide” Leads to “Nowhere to Hide” – Why Privacy Matters in an Age of Tech Totalitarianism

 

The Future of Artificial Intelligence: Why the Hype Has Outrun Reality, from the Wharton Global Forum

There is still a substantial gap that must be closed before AI becomes a reality for many of the applications that have been hyped. Closing that gap will require resources, effort, and time far greater than its current promoters suggest. From the Wharton Global Forum at knowledge.wharton.upenn.edu:

Robots that serve dinner, self-driving cars and drone-taxis could be fun and hugely profitable. But don’t hold your breath. They are likely much further off than the hype suggests.

A panel of experts at the recent 2017 Wharton Global Forum in Hong Kong outlined their views on the future for artificial intelligence (AI), robots, drones, and other tech advances, and how it all might affect employment. The upshot was to deflate some of the hype, while noting the threats ahead for certain jobs.

Their comments came in a panel session titled “Engineering the Future of Business,” with Wharton Dean Geoffrey Garrett moderating and speakers Pascale Fung, a professor of electronic and computer engineering at Hong Kong University of Science and Technology; Vijay Kumar, dean of engineering at the University of Pennsylvania; and Nicolas Aguzin, Asia-Pacific chairman and CEO for J.P. Morgan.

Kicking things off, Garrett asked: How big and disruptive is the self-driving car movement?

It turns out that so much of what appears in mainstream media about self-driving cars being just around the corner is very much overstated, said Kumar. Fully autonomous cars are many years away, in his view.

One of Kumar’s key points: Often there are two sides to high-tech advancements. One side gets a lot of media attention — advances in computing power, software and the like. Here, progress is quick — new apps, new companies and new products sprout up daily. However, the other, often-overlooked side deeply affects many projects — those where the virtual world must connect with the physical or mechanical world in new ways, noted Kumar, who is also a professor of mechanical engineering at Penn. Progress in that realm comes more slowly.

To continue reading: The Future of Artificial Intelligence: Why the Hype Has Outrun Reality

‘Artificial Intelligence’ was 2016’s fake news, by Andrew Orlowski

It will be a long time, if ever, before artificial intelligence replaces real human intelligence. From Andrew Orlowski at theregister.co.uk:

Putting the ‘AI’ into FAIL

“Fake news” vexed the media classes greatly in 2016, but the tech world perfected the art long ago. With “the internet” no longer a credible vehicle for Silicon Valley’s wild fantasies and intellectual bullying of other industries – the internet clearly isn’t working for people – “AI” has taken its place.

Almost everything you read about AI is fake news. The AI coverage comes from a media willing itself into the mind of a three-year-old child, in order to be impressed.

For example, how many human jobs did AI replace in 2016? If you gave professional pundits a multiple choice question listing these three answers: 3 million, 300,000 and none, I suspect very few would choose the correct answer, which is of course “none”.

Similarly, if you asked tech experts which recent theoretical or technical breakthrough could account for the rise in coverage of AI, even fewer would be able to answer correctly that “there hasn’t been one”.

As with the most cynical (or deranged) internet hypesters, the current “AI” hype has a grain of truth underpinning it. Today neural nets can process more data, faster. Researchers no longer habitually tweak their models. Speech recognition is a good example: it has been quietly improving for three decades. But the gains nowhere match the hype: they’re specialised and very limited in use. So not entirely useless, just vastly overhyped. As such, it more closely resembles “IoT”, where boring things happen quietly for years, rather than “Digital Transformation”, which means nothing at all.

To continue reading: ‘Artificial Intelligence’ was 2016’s fake news