The biggest whoppers, the most egregious “fake news,” come from governments, so when the powerful Pentagon takes an interest in combatting “fake news,” it’s a sure thing that the Defense Department isn’t after the truth. From Matt Taibbi at rollingstone.com:
If there’s a worse idea than the Pentagon becoming Editor-in-Chief of America, I can’t remember it. But we’re getting there:
From Bloomberg over Labor Day weekend:
Fake news and social media posts are such a threat to U.S. security that the Defense Department is launching a project to repel “large-scale, automated disinformation attacks,” as the top Republican in Congress blocks efforts to protect the integrity of elections.
One of the Pentagon’s most secretive agencies, the Defense Advanced Research Projects Agency (DARPA), is developing “custom software that can unearth fakes hidden among more than 500,000 stories, photos, video and audio clips.”
Once upon a time, when progressives still reflexively distrusted the military, DARPA was a liberal punchline, known for helping invent the Internet but also for developing lunatic privacy-invading projects like LifeLog, a program to “gather in a single place just about everything an individual says, sees, or does.”
DARPA now is developing a semantic analysis program called “SemaFor” and an image analysis program called “MediFor,” ostensibly designed to prevent the use of fake images or text. The idea would be to develop these technologies to help private Internet providers sift through content.
It’s the latest in a string of stories about new methods of control over information flow that should, but for some reason do not, horrify every working journalist.
From the Senate dragging Internet providers to the Hill to demand strategies against the sowing of “discord,” to tales of hundreds of Facebook sites zapped for “coordinated inauthentic behavior” following advice by government-connected groups like the Atlantic Council, it’s been clear the future of the information landscape is going to involve elaborate new forms of algorithmic regulation.
Stories about the need for such technologies are always couched as responses to the “fake news” problem. Unfortunately, “fake news” is a poorly defined, amorphous concept that the public has been trained to fear without really understanding.