Tag Archives: Artificial intelligence

Doug Casey on Virtual Girlfriends

There’s nothing like true love with a humanoid. Sex, and you don’t need to lie the next morning. From Doug Casey at caseyresearch.com:

Justin: Doug, I recently read an article about a U.S. company that’s offering a digital “girlfriend experience.”

3D Hologroup has created an app that allows you to download virtual girlfriends. And you can interact with these girls if you own an augmented reality device.

So, I visited the company’s website. I discovered that you can choose which model of girl you’d like, just like you would a pair of shoes.

It reminded me of the hologram girlfriend that Ryan Gosling’s character had in Blade Runner 2049, which came out last year.

What do you make of this? Are you surprised that you can now buy a digital girlfriend with just a few clicks of a mouse?

Doug: It’s a vision of things to come. I don’t think most people know that this is happening. But it’s an inevitable implication of Moore’s Law, the observation made in 1965 by Gordon Moore that computer power would double about every two years. But it’s not just computers; technology is advancing at that rate in a number of areas. Augmented reality is just one example. Artificial intelligence, robotics, genetic engineering, and nanotech are also advancing extremely rapidly.
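Moore’s observation compounds quickly. A back-of-the-envelope sketch (the two-year doubling period is the article’s figure; the function name is just for illustration) shows what that rate implies over the generation Casey talks about:

```python
# Rough compounding of Moore's Law: capability doubles every ~2 years.
def doublings(years: float, period: float = 2.0) -> float:
    """Growth factor after `years`, doubling once per `period` years."""
    return 2 ** (years / period)

# Over the ~25-year horizon mentioned below, that is about 12.5 doublings,
# i.e. roughly a 5,800-fold increase.
print(f"{doublings(25):,.0f}x")
```

This is only an extrapolation of the trend as stated, not a forecast; the point is simply that steady doubling produces thousand-fold changes within a single generation.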

It seems to me that we’re likely to see the Singularity within the next generation, just as Ray Kurzweil predicted. Among many other strange things, we’ll have humanoids and androids that will be increasingly hard to distinguish from actual people.

You’ll also be able to have your own Mr. (or Miss) Data, the android from Star Trek: The Next Generation. Albeit a relatively low-functioning version. This will have immense implications for how society functions, and how people relate to each other.

Of course, that’s 20 or 25 years from now, and there will be many steps along the way. But one thing you won’t have to wait long for is artificial reality suits. You’ll be able to step into one and experience an alternate reality: sight, smell, touch, hearing, and even taste, I suppose. It will be vastly more involving than watching a movie…

To continue reading: Doug Casey on Virtual Girlfriends


Humans Need Not Apply: AI to Take Over Customer Service Jobs, by Don Quijones

Virtual customer service agents may soon be not only smarter than humans, but friendlier and more empathetic. From Don Quijones at wolfstreet.com:

“With Amelia, we graduate into automating the knowledge worker, the customer service agent.”

The last ten years have been a rough time for many bank employees in Spain. The country’s lenders have laid off 89,500 workers on the back of narrowing margins, industry consolidation, mass closures of branches and gathering digitization. In 2008, when the financial crisis struck, Spain was home to some 278,000 banking professionals; today there are just 195,000. Another 3,000 redundancies are expected in the coming months, as Santander and Bankia plan to further streamline their businesses, pushing the total number of layoffs close to 95,000.

The job losses are unlikely to end there. In fact, they could accelerate, especially if a potential new threat to traditional branch and front-office jobs materializes: artificial intelligence (AI). As Finextra reports, BBVA, Spain’s second largest banking group, is on the verge of enlisting AI “agent” Amelia, developed by New York-based IPsoft, for many of its customer support functions:

BBVA has become the latest bank to employ Amelia, calling in the virtual assistant’s creator IPsoft to help develop AI-powered digital customer support services. The technology has already been trialled at BBVA’s call centre in Mexico to address customer complaints and enquiries. Now it will be extended to other markets and areas, as the bank seeks to digitise sales, advisory and support services.

Amelia is capable of detecting and adapting to callers’ emotions, as well as making decisions in real time, and can even suggest improvements to the processes for which ‘she’ has been trained.

Javier Díaz, CEO, IPsoft for Spain and Latin America, says: “Amelia is the result of 20 years of research during which we have tried to emulate the way the human brain works.”

It appears to be working. Amelia’s marquee clients already include around 20 Fortune 100 firms. The company is also in the process of developing pre-trained, limited-function mini-Amelias for small and medium-size businesses.

To continue reading: Humans Need Not Apply: AI to Take Over Customer Service Jobs

What Makes AI Dangerous? The State, by Per Bylund

As with most new technologies that have both benefits and dangers, the dangers are most pronounced when the technology gets in the hands of governments. From Per Bylund at mises.org:

So I watched “Do you trust this computer?”, a film that “explores the promises and perils” of artificial intelligence. While it notes both the good and the bad, it has an obvious focus on how AI might bring about “the end of the world as we know it” (TEOTWAWKI). That is, if it is left unregulated.

It’s strange, however, that the examples of TEOTWAWKI AI were “autonomous weapons” and “fake news,” the latter because of how it can provide a path for a minority-supported dictator to “take over.” While I understand (and fear) both, the examples have one thing in common – but it is not AI.

That one thing is the State. Only States’ militaries and groups looking to take over a State have any interest in “killer robots.” They’re also developed by/for those groups. The fake news and “undue influence” issue is also about power over the State. Neither weapons nor fake news require AI. Yet, in some strange twist, the filmmakers make it an AI problem. Worse: they end the film indicating that the main problem is that AI is “unregulated.”

But this is completely illogical: how can the State be both the problem’s common denominator *and* the solution?

Instead, we’re led to believe that it is problematic that Google tracks our web searches and Facebook knows our friends and beliefs (“because autonomous weapons”?). While I agree that it is ugly, neither company is making a claim over life and death. In fact, they operate under the harshest regulation there is: the market. They are making investments to make money, and money can only be made in one of two ways: by offering something that people want and are willing to pay for (Oppenheimer’s “economic” means), or by simply taking it from people against their will (“political” means). Companies operate according to the former, which means they are at the mercy of consumers. The State operates according to the latter.

 

To continue reading: What Makes AI Dangerous? The State

Wells Fargo’s Artificial Intelligence Defies Analysts, Slaps “Sell” on Google and Facebook, by Wolf Richter

SLL’s bet is on the artificial, rather than the human, intelligence. From Wolf Richter at wolfstreet.com:

Google, which makes almost all of its money on ads and internet user data, is undertaking herculean efforts to get a grip on artificial intelligence (AI). It’s trying to develop software that allows machines to think and learn like humans. It’s spending enormous resources on it. This includes the $525 million acquisition in 2014 of DeepMind, which is said to have lost an additional $162 million in 2016. Google is trying to load smartphones with AI and come up with AI smart speakers and other gadgets, and ultimately AI systems that control self-driving cars.

Facebook, which also makes most of its money on ads and user data, is on a similar trajectory, but spreading into other directions, including a “creepy” run-in with two of its bots that were supposed to negotiate with each other but ended up drifting off human language and invented their own language that humans couldn’t understand.

And here comes an AI bot developed by stock analysts at Wells Fargo Securities. The human analysts have an “outperform” rating on Google’s parent Alphabet and on Facebook. They worked with a data scientist at Amazon’s Alexa project to create the AI bot. And after six months of work, the AI bot was allowed to do its job. According to their note to clients on Friday, reported by Bloomberg, the AI bot promptly slapped a “sell” rating on Google and Facebook.

Human analysts on Wall Street are famous for their incessantly optimistic ratings and outlooks. They generally only put a “sell” on a stock after it has already plunged. They’re part of Wall Street’s human hype machine. Their job is to help inflate stock prices and make CEOs feel good so that they will do business with the analysts’ firms and send fees their way. But Wells Fargo’s AI bot hasn’t gotten the memo.

Last month, a group led by Ken Sena, head of Global Internet Research at Wells Fargo Securities, introduced this “artificially intelligent equity research analyst,” or AIERA. Its “primary purpose is to track stocks and formulate a daily, weekly, and overall view on whether the stocks tracked will go up or down,” Sena said at the time.

So “she” did Big Data analysis of Alphabet, Facebook, and some other stocks, and after seeing what’s there, averted her eyes in disgust and slapped a “sell” recommendation on both stocks and a “hold” recommendation on 11 other cherished stocks.

To continue reading: Wells Fargo’s Artificial Intelligence Defies Analysts, Slaps “Sell” on Google and Facebook

How “Nothing to Hide” Leads to “Nowhere to Hide” – Why Privacy Matters in an Age of Tech Totalitarianism, from the Daily Bell

Your life may be one of pristine purity and goodness, but though you may think you have nothing to hide, you may still not want to give up your privacy. From the Daily Bell at dailybell.com:

Editor’s note: The following comes from a longtime journalist who specializes in writing for major media outlets and private companies about robots, Big Data, and Artificial Intelligence (AI).

Would you allow a government official into your bedroom on your honeymoon? Or let your mother-in-law hear and record every conversation that takes place in your home or car – especially disagreements with your husband or wife? Would you let a stranger sit in on your children’s playdates so that he could better understand how to entice them with candy or a doll?

Guess what? If you bring your phone with you everywhere, or engage with a whole-house robo helper such as Alexa or Echo or Siri or Google, you’re opening up every aspect of your life to government officials, snooping (possibly criminal) hackers, and advertisers targeting you, your spouse and your children.

The following is not a screed against technology. But it is a plea to consider what we’re giving up when we hand over privacy, wholesale, to people whom we can neither see nor hear… people whose motives we cannot fathom.

The widened lanes of communication, and the conveniences that Smart Phones, wireless communities, Big Data and Artificial Intelligence (AI) have fomented are indeed helpful to some extent. They allow, for example, for remote working, which allows people to spend more time with their families and less time commuting. In areas such as the energy business, the field of predictive analytics, born of Big Data and the Industrial Internet, helps mitigate the danger of sending humans to oil rigs at sea. 

And on a personal level, of course, the conveniences are innumerable: Grandparents living far away can “see” their grandchildren more often than they could in years past, thanks to technology such as FaceTime and Skype.

To continue reading: How “Nothing to Hide” Leads to “Nowhere to Hide” – Why Privacy Matters in an Age of Tech Totalitarianism

 

The Future of Artificial Intelligence: Why the Hype Has Outrun Reality, from the Wharton Global Forum

There is still a substantial gap that must be closed before AI becomes a reality for many of the applications that have been hyped. Closing that gap will require resources, effort, and time far greater than current promoters suggest. From the Wharton Global Forum at knowledge.wharton.upenn.edu:

Robots that serve dinner, self-driving cars and drone-taxis could be fun and hugely profitable. But don’t hold your breath. They are likely much further off than the hype suggests.

A panel of experts at the recent 2017 Wharton Global Forum in Hong Kong outlined their views on the future for artificial intelligence (AI), robots, drones, other tech advances and how it all might affect employment in the future. The upshot was to deflate some of the hype, while noting the threats ahead posed to certain jobs.

Their comments came in a panel session titled, “Engineering the Future of Business,” with Wharton Dean Geoffrey Garrett moderating and speakers Pascale Fung, a professor of electronic and computer engineering at Hong Kong University of Science and Technology; Vijay Kumar, dean of engineering at the University of Pennsylvania; and Nicolas Aguzin, Asian-Pacific chairman and CEO for J.P. Morgan.

Kicking things off, Garrett asked: How big and disruptive is the self-driving car movement?

It turns out that so much of what appears in mainstream media about self-driving cars being just around the corner is very much overstated, said Kumar. Fully autonomous cars are many years away, in his view.

One of Kumar’s key points: Often there are two sides to high-tech advancements. One side gets a lot of media attention — advances in computing power, software and the like. Here, progress is quick — new apps, new companies and new products sprout up daily. However, the other, often-overlooked side deeply affects many projects — those where the virtual world must connect with the physical or mechanical world in new ways, noted Kumar, who is also a professor of mechanical engineering at Penn. Progress in that realm comes more slowly.

To continue reading: The Future of Artificial Intelligence: Why the Hype Has Outrun Reality

‘Artificial Intelligence’ was 2016’s fake news, by Andrew Orlowski

It will be a long time, probably never, before artificial intelligence replaces real human intelligence. From Andrew Orlowski at theregister.co.uk:

Putting the ‘AI’ into FAIL

“Fake news” vexed the media classes greatly in 2016, but the tech world perfected the art long ago. With “the internet” no longer a credible vehicle for Silicon Valley’s wild fantasies and intellectual bullying of other industries – the internet clearly isn’t working for people – “AI” has taken its place.

Almost everything you read about AI is fake news. The AI coverage comes from a media willing itself into the mind of a three-year-old child, in order to be impressed.

For example, how many human jobs did AI replace in 2016? If you gave professional pundits a multiple choice question listing these three answers: 3 million, 300,000 and none, I suspect very few would choose the correct answer, which is of course “none”.

Similarly, if you asked tech experts which recent theoretical or technical breakthrough could account for the rise in coverage of AI, even fewer would be able to answer correctly that “there hasn’t been one”.

As with the most cynical (or deranged) internet hypesters, the current “AI” hype has a grain of truth underpinning it. Today neural nets can process more data, faster. Researchers no longer habitually tweak their models. Speech recognition is a good example: it has been quietly improving for three decades. But the gains nowhere match the hype: they’re specialised and very limited in use. So not entirely useless, just vastly overhyped. As such, it more closely resembles “IoT”, where boring things happen quietly for years, rather than “Digital Transformation”, which means nothing at all.

To continue reading: ‘Artificial Intelligence’ was 2016’s fake news