Tag Archives: Artificial intelligence

Artificial Intelligence Or Real Stupidity? by David Robertson

Artificial intelligence has been way overhyped. From David Robertson at realinvestmentadvice.com:

It’s hard to go anywhere these days without coming across some mention of artificial intelligence (AI). You hear about it, you read about it and it’s hard to find a presentation deck (on any subject) that doesn’t mention it. There is no doubt there is a lot of hype around the subject.

While the hype does increase awareness of AI, it also facilitates some pretty silly activities and can distract people from much of the real progress being made. Disentangling the reality from the more dramatic headlines promises to provide significant advantages for investors, business people and consumers alike.

Artificial intelligence has gained its recent notoriety in large part due to high-profile successes such as IBM’s Watson winning at Jeopardy and Google’s AlphaGo beating the world champion at the game Go. Waymo, Tesla and others have also made great strides with self-driving vehicles. The expansiveness of AI applications was captured by Richard Waters in the Financial Times [here]: “If there was a unifying message underpinning the consumer technology on display [at the Consumer Electronics Show] … it was: ‘AI in everything’.”

High profile AI successes have also captured people’s imaginations to such a degree that they have prompted other far reaching efforts. One instructive example was documented by Thomas H. Davenport and Rajeev Ronanki in the Harvard Business Review [here]. They describe, “In 2013, the MD Anderson Cancer Center launched a ‘moon shot’ project: diagnose and recommend treatment plans for certain forms of cancer using IBM’s Watson cognitive system.” Unfortunately, the system didn’t work and by 2017, “the project was put on hold after costs topped $62 million—and the system had yet to be used on patients.”

Continue reading


The Real Reason Why Globalists Are So Obsessed With Artificial Intelligence, by Brandon Smith

Brandon Smith makes an incisive observation: the globalists like artificial intelligence because it’s just like them: artificially and soullessly intelligent. From Smith at alt-market.com:

It is nearly impossible to traverse web news or popular media today without being assaulted by vast amounts of propaganda on Artificial Intelligence (AI). It is perhaps the fad to end all fads as it supposedly encompasses almost every aspect of human existence, from economics and security to philosophy and art. According to mainstream claims, AI can do almost everything and do it better than any human being. And, the things AI can’t do, it WILL be able to do eventually.

Whenever the establishment attempts to saturate the media with a particular narrative, it is usually with the intent to manipulate public perception in a way that produces a self-fulfilling prophecy. In other words, they hope to shape reality by telling a particular lie so often it becomes accepted by the masses over time as fact. They do this with the idea of globalism as inevitable, with the junk science of climate change as “undeniable,” and they do it with AI as a technological necessity.

The globalists have long held AI as a kind of holy grail in centralization technology. The United Nations has adopted numerous positions on the issue and even hosted summits on it, including the “AI For Good” summit in Geneva. The UN insinuates that its primary interest in AI is in regulation or observation of how it is exploited, but the UN also has clear goals to use AI to its advantage. The use of AI as a means to monitor mass data to better institute “sustainable development” is written clearly in the UN’s agenda.

The IMF is also in on the AI trend, holding global discussions on the uses of AI in economics as well as the effects of algorithms on economic analysis.

The main source for the development of AI has long been DARPA. The military and globalist think tank dumps billions of dollars into the technology, making AI the underlying focus of most of DARPA’s work. AI is not only on the globalists’ radar; they are essentially spearheading the creation and promotion of it.

The globalist desire for the technology is not as simple as some might assume, however. They have strategic reasons, but also religious reasons for placing AI on an ideological pedestal. But first I suppose we should tackle the obvious.

Continue reading

Artificial Intelligence Will Kill Us All. Unless… by John Hunt, M.D.

Whether AI is a force for evil or good will depend on who’s teaching it. From John Hunt, M.D., at internationalman.com:

The usual suspects are demanding government regulation of AI. They say that government must defend us all from the misuse of AI by the profit-seekers.

In my view, however, the only thing worse than the government sticking its nose into AI is if we have AI learn by mimicking the behavior of serial killers.

Although best known for their #1 best-selling book, Life Extension: A Practical Scientific Approach, Durk Pearson and Sandy Shaw are the two most broadly intelligent and well-informed people I have encountered in my life. They are rocket scientists (Durk literally is). This is what I learned from Durk and Sandy about AI:

AI learns by watching and mimicking people.

An AI will be extremely effective at whatever it learns. If it observes and mimics good people—ethical people—an AI will be really good. If it learns from bad people—by mimicking unethical people—an AI will be unconscionably evil.

If we allow government (politicians and bureaucrats) to regulate AI, then who will AI be exposed to and learn to emulate? The answer is: politicians and bureaucrats.

Continue reading

Doug Casey on Virtual Girlfriends

There’s nothing like true love with a humanoid: sex without having to lie the next morning. From Doug Casey at caseyresearch.com:

Justin: Doug, I recently read an article about a U.S. company that’s offering a digital “girlfriend experience.”

3D Hologroup has created an app that allows you to download virtual girlfriends. And you can interact with these girls if you own an augmented reality device.

So, I visited the company’s website. I discovered that you can choose which model of girl you’d like, just like you would a pair of shoes.

It reminded me of the hologram girlfriend that Ryan Gosling’s character had in Blade Runner 2049, which came out last year.

What do you make of this? Are you surprised that you can now buy a digital girlfriend with just a few clicks of a mouse?

Doug: It’s a vision of things to come. I don’t think most people know that this is happening. But it’s an inevitable implication of Moore’s Law, the observation made in 1965 by Gordon Moore that computer power would double about every two years. But it’s not just computers; technology is advancing at that rate in a number of areas. Augmented reality is just one example. Artificial intelligence, robotics, genetic engineering, and nanotech are also advancing extremely rapidly.
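Casey’s Moore’s Law point is easy to quantify. A minimal sketch of the arithmetic (the function name is illustrative; the two-year doubling period is the figure quoted above):

```python
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor after `years`, assuming capacity doubles every `doubling_period` years."""
    return 2.0 ** (years / doubling_period)

# Over the 20-to-25-year horizon Casey mentions, strict doubling every
# two years implies a roughly thousand-fold increase in computing power:
print(moores_law_factor(20))  # 1024.0
```

That compounding, rather than any single breakthrough, is what underlies projections like the one that follows.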

It seems to me that we’re likely to see the Singularity within the next generation, just as Ray Kurzweil predicted. Among many other strange things, we’ll have humanoids and androids that will be increasingly hard to distinguish from actual people.

You’ll also be able to have your own Mr. (or Miss) Data, the android from Star Trek: The Next Generation—albeit a relatively low-functioning version. This will have immense implications for how society functions and how people relate to each other.

Of course, that’s 20 or 25 years from now, and there will be many steps along the way. But one thing you won’t have to wait long for is artificial reality suits. You’ll be able to step into one and experience an alternate reality: sight, smell, touch, hearing, and even taste I suppose. It will be vastly more involving than watching a movie…

To continue reading: Doug Casey on Virtual Girlfriends

Humans Need Not Apply: AI to Take Over Customer Service Jobs, by Don Quijones

Virtual customer service agents may soon be not only smarter than humans, but friendlier and more empathetic. From Don Quijones at wolfstreet.com:

“With Amelia, we graduate into automating the knowledge worker, the customer service agent.”

The last ten years have been a rough time for many bank employees in Spain. The country’s lenders have laid off 89,500 workers on the back of narrowing margins, industry consolidation, mass closures of branches and gathering digitization. In 2008, when the financial crisis struck, Spain was home to some 278,000 banking professionals; today there are just 195,000. Another 3,000 redundancies are expected in the coming months, as Santander and Bankia plan to further streamline their businesses, pushing the total number of layoffs close to 95,000.

The job losses are unlikely to end there. In fact, they could accelerate, especially if a potential new threat to traditional branch and front-office jobs materializes: artificial intelligence (AI). As Finextra reports, BBVA, Spain’s second-largest banking group, is on the verge of enlisting AI “agent” Amelia, developed by New York-based IPsoft, for many of its customer support functions:

BBVA has become the latest bank to employ Amelia, calling in the virtual assistant’s creator IPsoft to help develop AI-powered digital customer support services. The technology has already been trialled at BBVA’s call centre in Mexico to address customer complaints and enquiries. Now it will be extended to other markets and areas, as the bank seeks to digitise sales, advisory and support services.

Amelia is capable of detecting and adapting to callers’ emotions, as well as making decisions in real time, and can even suggest improvements to the processes for which ‘she’ has been trained.

Javier Díaz, CEO, IPsoft for Spain and Latin America, says: “Amelia is the result of 20 years of research during which we have tried to emulate the way the human brain works.”

It appears to be working. Amelia’s marquee clients already include around 20 Fortune 100 firms. The company is also in the process of developing pre-trained, limited-function mini-Amelias for small and medium-size businesses.

To continue reading: Humans Need Not Apply: AI to Take Over Customer Service Jobs

What Makes AI Dangerous? The State, by Per Bylund

As with most new technologies that have both benefits and dangers, the dangers are most pronounced when the technology gets in the hands of governments. From Per Bylund at mises.org:

So I watched “Do you trust this computer?”, a film that “explores the promises and perils” of artificial intelligence. While it notes both the good and the bad, it has an obvious focus on how AI might bring about “the end of the world as we know it” (TEOTWAWKI). That is, if it is left unregulated.

It’s strange, however, that the examples of TEOTWAWKI AI were “autonomous weapons” and “fake news,” the latter because of how it can provide a path for a minority-supported dictator to “take over.” While I understand (and fear) both, the examples have one thing in common – but it is not AI.

That one thing is the State. Only States’ militaries and groups looking to take over a State have any interest in “killer robots.” They’re also developed by/for those groups. The fake news and “undue influence” issue is also about power over the State. Neither weapons nor fake news require AI. Yet, in some strange twist, the filmmakers make it an AI problem. Worse: they end the film indicating that the main problem is that AI is “unregulated.”

But this is completely illogical: how can the State be both the problem’s common denominator *and* its solution?

Instead, we’re led to believe that it is problematic that Google tracks our web searches and Facebook knows our friends and beliefs (“because autonomous weapons”?). While I agree that it is ugly, neither company is making a claim over life and death. In fact, they operate under the harshest regulation there is: the market. They are making investments to make money, and money can only be made in one of two ways: through offering something that people want and are willing to pay for (Oppenheimer’s “economic” means), or through simply taking it from people against their will (“political” means). Companies operate according to the former, which means they are subject to the mercy of consumers. The State operates according to the latter.


To continue reading: What Makes AI Dangerous? The State

Wells Fargo’s Artificial Intelligence Defies Analysts, Slaps “Sell” on Google and Facebook, by Wolf Richter

SLL’s bet is on the artificial, rather than the human, intelligence. From Wolf Richter at wolfstreet.com:

Google, which makes almost all of its money on ads and internet user data, is undertaking herculean efforts to get a grip on artificial intelligence (AI). It’s trying to develop software that allows machines to think and learn like humans. It’s spending enormous resources on it. This includes the $525 million acquisition in 2014 of DeepMind, which is said to have lost an additional $162 million in 2016. Google is trying to load smartphones with AI and come up with AI smart speakers and other gadgets, and ultimately AI systems that control self-driving cars.

Facebook, which also makes most of its money on ads and user data, is on a similar trajectory, but spreading into other directions, including a “creepy” run-in with two of its bots that were supposed to negotiate with each other but ended up drifting off human language and invented their own language that humans couldn’t understand.

And here comes an AI bot developed by stock analysts at Wells Fargo Securities. The human analysts have an “outperform” rating on Google’s parent Alphabet and on Facebook. They worked with a data scientist at Amazon’s Alexa project to create the AI bot. And after six months of work, the AI bot was allowed to do its job. According to their note to clients on Friday, reported by Bloomberg, the AI bot promptly slapped a “sell” rating on Google and Facebook.

Human analysts on Wall Street are famous for their incessantly optimistic ratings and outlooks. They generally only put a “sell” on a stock after it has already plunged. They’re part of Wall Street’s human hype machine. Their job is to help inflate stock prices and make CEOs feel good so that they will do business with the analysts’ firms and send fees their way. But Wells Fargo’s AI bot hasn’t gotten the memo.

Last month, a group led by Ken Sena, head global internet analyst at Wells Fargo Securities, introduced this “artificially intelligent equity research analyst,” or AIERA. Its “primary purpose is to track stocks and formulate a daily, weekly, and overall view on whether the stocks tracked will go up or down,” Sena said at the time.

So “she” did Big Data analysis of Alphabet, Facebook, and some other stocks, and after seeing what’s there, averted her eyes in disgust and slapped a “sell” recommendation on both stocks and a “hold” recommendation on 11 other cherished stocks.

To continue reading: Wells Fargo’s Artificial Intelligence Defies Analysts, Slaps “Sell” on Google and Facebook