The future of fake news: don’t believe everything you read, see or hear

A new breed of video and audio manipulation tools allows for the creation of realistic-looking fake news footage, like the now-infamous fake Obama speech.

In an age of Photoshop, filters and social media, many of us are used to seeing manipulated pictures – subjects become slimmer and smoother or, in the case of Snapchat, transformed into puppies.

The University of Washington’s Synthesizing Obama project took audio from one of Obama’s speeches and used it to animate his face in an entirely different video.

However, there’s a new breed of video and audio manipulation tools, made possible by advances in artificial intelligence and computer graphics, that will allow for the creation of realistic looking footage of public figures appearing to say, well, anything. Trump declaring his proclivity for water sports. Hillary Clinton describing the stolen children she keeps locked in her wine cellar. Tom Cruise finally admitting what we suspected all along … that he’s a Brony.

This is the future of fake news. We’ve long been told not to believe everything we read, but soon we’ll have to question everything we see and hear as well.

For now, there are several research teams working on capturing and synthesizing different visual and audio elements of human behavior.

Software developed at Stanford University is able to manipulate video footage of public figures to allow a second person to put words in their mouth – in real time. Face2Face captures the second person’s facial expressions as they talk into a webcam and then morphs those movements directly onto the face of the person in the original video. The research team demonstrated their technology by puppeteering videos of George W Bush, Vladimir Putin and Donald Trump.

Face2Face lets you puppeteer celebrities and politicians, literally putting words in their mouths.

On its own, Face2Face is a fun plaything for creating memes and entertaining late night talk show hosts. However, with the addition of a synthesized voice, it becomes more convincing – not only does the digital puppet look like the politician, but it can also sound like the politician.

Canadian startup Lyrebird has developed similar capabilities, which it says can be used to turn text into on-the-spot audiobooks “read” by famous voices or for characters in video games.

Although Lyrebird’s intentions may be benign, voice-morphing technology could be combined with face-morphing technology to create convincing fake statements by public figures.

You only have to look at the University of Washington’s Synthesizing Obama project, where they took the audio from one of Obama’s speeches and used it to animate his face in an entirely different video with incredible accuracy (thanks to training a recurrent neural network with hours of footage), to get a sense of how insidious these adulterations can be.
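To get a rough feel for that pipeline, here is a minimal numpy sketch of the core idea: learn a mapping from per-frame audio features to mouth-landmark positions, then drive the mouth with audio from a different recording. The actual project trains a recurrent neural network on hours of footage; the linear least-squares fit and every dimension below are illustrative stand-ins, not the researchers’ method.

```python
import numpy as np

# Toy stand-in for the Synthesizing Obama idea: map per-frame audio
# features to mouth-landmark positions. The real system uses a recurrent
# neural network; a least-squares linear fit stands in here purely to
# illustrate the audio -> mouth-shape mapping.

rng = np.random.default_rng(0)

n_frames, n_audio, n_landmarks = 500, 13, 18  # e.g. 13 audio features -> 18 lip points (x, y)

# Synthetic "training footage": audio features and the mouth shapes that
# accompanied them (generated here from a hidden linear relationship).
audio = rng.normal(size=(n_frames, n_audio))
true_map = rng.normal(size=(n_audio, n_landmarks * 2))
mouth = audio @ true_map + 0.01 * rng.normal(size=(n_frames, n_landmarks * 2))

# Fit the audio -> mouth-shape mapping by least squares.
learned_map, *_ = np.linalg.lstsq(audio, mouth, rcond=None)

# Drive the mouth with *new* audio, as the project does when it animates
# a face using speech taken from an entirely different video.
new_audio = rng.normal(size=(10, n_audio))
predicted_mouth = new_audio @ learned_map
print(predicted_mouth.shape)  # one (x, y) lip contour per audio frame
```

The predicted lip contours would then be composited onto target video frames, which is the step that makes the real system so convincing.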

Beyond fake news there are many other implications, said Nitesh Saxena, associate professor and research director of the University of Alabama at Birmingham’s department of computer science. “You could leave fake voice messages posing as someone’s mum. Or defame someone and post the audio samples online.”

Read the complete article on The Guardian website.

 


Big data billionaire waging war on mainstream media

Robert Mercer in New York in 2014. Photograph: DDP USA/Rex Shutterstock

With links to Donald Trump, Steve Bannon and Nigel Farage, the rightwing US computer scientist Robert Mercer is at the heart of a multimillion-dollar propaganda network.

Carole Cadwalladr wrote this article on Robert Mercer, who funds a large propaganda network, for The Guardian newspaper. This is her story.

Just over a week ago, Donald Trump gathered members of the world’s press before him and told them they were liars. “The press, honestly, is out of control,” he said. “The public doesn’t believe you any more.” CNN was described as “very fake news… story after story is bad”. The BBC was “another beauty”.

That night I did two things. First, I typed “Trump” in the search box of Twitter. My feed was reporting that he was crazy, a lunatic, a raving madman. But that wasn’t how it was playing out elsewhere. The results produced a stream of “Go Donald!!!!”, and “You show ’em!!!” There were star-spangled banner emojis and thumbs-up emojis and clips of Trump laying into the “FAKE news MSM liars!”

Trump had spoken, and his audience had heard him. Then I did what I’ve been doing for two and a half months now. I Googled “mainstream media is…” And there it was. Google’s autocomplete suggestions: “mainstream media is… dead, dying, fake news, fake, finished”. Is it dead, I wonder? Has FAKE news won? Are we now the FAKE news? Is the mainstream media – we, us, I – dying?

I click Google’s first suggested link. It leads to a website called CNSnews.com and an article: “The Mainstream media are dead.” They’re dead, I learn, because they – we, I – “cannot be trusted”. How had it, an obscure site I’d never heard of, dominated Google’s search algorithm on the topic? In the “About us” tab, I learn CNSnews is owned by the Media Research Center, which a click later I learn is “America’s media watchdog”, an organisation that claims an “unwavering commitment to neutralising leftwing bias in the news, media and popular culture”.

Another couple of clicks and I discover that it receives a large bulk of its funding – more than $10m in the past decade – from a single source, the hedge fund billionaire Robert Mercer. If you follow US politics you may recognise the name. Robert Mercer is the money behind Donald Trump. But then, I will come to learn, Robert Mercer is the money behind an awful lot of things. He was Trump’s single biggest donor. Mercer started by backing Ted Cruz, but when Cruz fell out of the presidential race he threw his money – $13.5m of it – behind the Trump campaign.

It’s money he’s made as a result of his career as a brilliant but reclusive computer scientist. He started his career at IBM, where he made what the Association for Computational Linguistics called “revolutionary” breakthroughs in language processing – a science that went on to be key in developing today’s AI – and later became joint CEO of Renaissance Technologies, a hedge fund that makes its money by using algorithms to model and trade on the financial markets.

One of its funds, Medallion, which manages only its employees’ money, is the most successful in the world – generating $55bn so far. And since 2010, Mercer has donated $45m to different political campaigns – all Republican – and another $50m to non-profits – all rightwing, ultra-conservative. This is a billionaire who is, as billionaires are wont, trying to reshape the world according to his personal beliefs.

Robert Mercer very rarely speaks in public and never to journalists, so to gauge his beliefs you have to look at where he channels his money: a series of yachts, all called Sea Owl; a $2.9m model train set; climate change denial (he funds a climate change denial thinktank, the Heartland Institute); and what is maybe the ultimate rich man’s plaything – the disruption of the mainstream media. In this he is helped by his close associate Steve Bannon, Trump’s campaign manager and now chief strategist. The money he gives to the Media Research Center, with its mission of correcting “liberal bias”, is just one of his media plays. There are other, bigger and even more deliberate strategies, and shining brightly, the star at the centre of the Mercer media galaxy, is Breitbart.

It was $10m of Mercer’s money that enabled Bannon to fund Breitbart – a rightwing news site, set up with the express intention of being a Huffington Post for the right. It has launched the careers of Milo Yiannopoulos and his like, regularly hosts antisemitic and Islamophobic views, and is currently being boycotted by more than 1,000 brands after an activist campaign. It has been phenomenally successful: the 29th most popular site in America with 2bn page views a year. It’s bigger than its inspiration, the Huffington Post, bigger, even, than PornHub. It’s the biggest political site on Facebook. The biggest on Twitter.

Prominent rightwing journalist Andrew Breitbart, who founded the site but died in 2012, told Bannon that they had “to take back the culture”. And, arguably, they have, though American culture is only the start of it. In 2014, Bannon launched Breitbart London, telling the New York Times it was specifically timed ahead of the UK’s forthcoming election. It was, he said, the latest front “in our current cultural and political war”. France and Germany are next.

But there was another reason why I recognised Robert Mercer’s name: because of his connection to Cambridge Analytica, a small data analytics company. He is reported to have a $10m stake in the company, which was spun out of a bigger British company called SCL Group. It specialises in “election management strategies” and “messaging and information operations”, refined over 25 years in places like Afghanistan and Pakistan. In military circles this is known as “psyops” – psychological operations. (Mass propaganda that works by acting on people’s emotions.)

Cambridge Analytica worked for the Trump campaign and, so I’d read, the Leave campaign. When Mercer supported Cruz, Cambridge Analytica worked with Cruz. When Robert Mercer started supporting Trump, Cambridge Analytica came too. And where Mercer’s money is, Steve Bannon is usually close by: it was reported that until recently he had a seat on the board.

Last December, I wrote about Cambridge Analytica in a piece about how Google’s search results on certain subjects were being dominated by rightwing and extremist sites. Jonathan Albright, a professor of communications at Elon University, North Carolina, who had mapped the news ecosystem and found millions of links between rightwing sites “strangling” the mainstream media, told me that trackers from sites like Breitbart could also be used by companies like Cambridge Analytica to follow people around the web and then, via Facebook, target them with ads.

On its website, Cambridge Analytica makes the astonishing boast that it has psychological profiles based on 5,000 separate pieces of data on 220 million American voters – its USP is to use this data to understand people’s deepest emotions and then target them accordingly. The system, according to Albright, amounted to a “propaganda machine”.

A few weeks later, the Observer received a letter. Cambridge Analytica was not employed by the Leave campaign, it said. Cambridge Analytica “is a US company based in the US. It hasn’t worked in British politics.”

Which is how, earlier this week, I ended up in a Pret a Manger near Westminster with Andy Wigmore, Leave.EU’s affable communications director, looking at snapshots of Donald Trump on his phone. It was Wigmore who orchestrated Nigel Farage’s trip to Trump Tower – the PR coup that saw him become the first foreign politician to meet the president elect.

Wigmore scrolls through the snaps on his phone. “That’s the one I took,” he says pointing at the now globally famous photo of Farage and Trump in front of his golden elevator door giving the thumbs-up sign. Wigmore was one of the “bad boys of Brexit” – a term coined by Arron Banks, the Bristol-based businessman who was Leave.EU’s co-founder.

Cambridge Analytica had worked for them, he said. It had taught them how to build profiles, how to target people and how to scoop up masses of data from people’s Facebook profiles. A video on YouTube shows one of Cambridge Analytica’s and SCL’s employees, Brittany Kaiser, sitting on the panel at Leave.EU’s launch event.

Facebook was the key to the entire campaign, Wigmore explained. A Facebook ‘like’, he said, was their most “potent weapon”. “Because using artificial intelligence, as we did, tells you all sorts of things about that individual and how to convince them with what sort of advert. And you knew there would also be other people in their network who liked what they liked, so you could spread. And then you follow them. The computer never stops learning and it never stops monitoring.”

It sounds creepy, I say.

“It is creepy! It’s really creepy! It’s why I’m not on Facebook! I tried it on myself to see what information it had on me and I was like, ‘Oh my God!’ What’s scary is that my kids had put things on Instagram and it picked that up. It knew where my kids went to school.”

They hadn’t “employed” Cambridge Analytica, he said. No money changed hands. “They were happy to help.”

Why?

“Because Nigel is a good friend of the Mercers. And Robert Mercer introduced them to us. He said, ‘Here’s this company we think may be useful to you.’ What they were trying to do in the US and what we were trying to do had massive parallels. We shared a lot of information. Why wouldn’t you?” Behind Trump’s campaign and Cambridge Analytica, he said, were “the same people. It’s the same family.”

There were already a lot of questions swirling around Cambridge Analytica, and Andy Wigmore has opened up a whole lot more. Such as: are you supposed to declare services-in-kind as some sort of donation? The Electoral Commission says yes, if it was more than £7,500. And was it declared? The Electoral Commission says no. Does that mean a foreign billionaire had possibly influenced the referendum without that influence being apparent? It’s certainly a question worth asking.

In the last month or so, articles in first the Swiss and the US press have asked exactly what Cambridge Analytica is doing with US voters’ data. In a statement to the Observer, the Information Commissioner’s Office said: “Any business collecting and using personal data in the UK must do so fairly and lawfully. We will be contacting Cambridge Analytica and asking questions to find out how the company is operating in the UK and whether the law is being followed.”

Cambridge Analytica said last Friday they are in touch with the ICO and are completely compliant with UK and EU data laws. It did not answer other questions the Observer put to it this week about how it built its psychometric model, which owes its origins to original research carried out by scientists at Cambridge University’s Psychometric Centre, research based on a personality quiz on Facebook that went viral. More than 6 million people ended up doing it, producing an astonishing treasure trove of data.

These Facebook profiles – especially people’s “likes” – could be correlated across millions of others to produce uncannily accurate results. Michal Kosinski, the centre’s lead scientist, found that with knowledge of 150 likes, their model could predict someone’s personality better than their spouse. With 300, it understood you better than yourself. “Computers see us in a more robust way than we see ourselves,” says Kosinski.
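The mechanics of that kind of likes-based prediction can be sketched in a few lines: treat each profile as a binary vector of likes and fit a regression from that vector to a trait score. Everything below (the data, the dimensions, the ridge fit) is a synthetic stand-in for illustration, not the Psychometric Centre’s actual model.

```python
import numpy as np

# Toy illustration of a Kosinski-style model: predict a personality
# trait score from binary "likes". The real study used millions of
# profiles and the Big Five traits; the numbers here are made up.

rng = np.random.default_rng(1)

n_users, n_pages = 2000, 300
likes = (rng.random((n_users, n_pages)) < 0.1).astype(float)  # 1 = user liked page

# Hidden ground truth: each page carries a small signal about the trait.
page_weights = rng.normal(scale=0.5, size=n_pages)
trait = likes @ page_weights + rng.normal(scale=1.0, size=n_users)

# Ridge regression: fit on one half of the users, evaluate on the other.
train, test = slice(0, 1000), slice(1000, 2000)
X, y = likes[train], trait[train]
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(n_pages), X.T @ y)

pred = likes[test] @ w
corr = np.corrcoef(pred, trait[test])[0, 1]
print(f"trait prediction correlation: {corr:.2f}")
```

The striking empirical finding was how quickly such a model overtakes human judges as the number of observed likes grows.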

But there are strict ethical regulations regarding what you can do with this data. Did SCL Group have access to the university’s model or data, I ask Professor Jonathan Rust, the centre’s director? “Certainly not from us,” he says. “We have very strict rules around this.”

A scientist, Aleksandr Kogan, from the centre was contracted to build a model for SCL, and says he collected his own data. Professor Rust says he doesn’t know where Kogan’s data came from. “The evidence was contrary. I reported it.” An independent adjudicator was appointed by the university. “But then Kogan said he’d signed a non-disclosure agreement with SCL and he couldn’t continue [answering questions].”

Kogan disputes this and says SCL satisfied the university’s inquiries. But perhaps more than anyone, Professor Rust understands how the kind of information people freely give up to social media sites could be used.

Read the other half of her story on The Guardian website here.

 

DARPA seeks gamers’ input for AI military drones

According to a recent press release, DARPA is seeking gamers’ input on various military scenarios involving drones. The effort ties in with DARPA’s broader AI research and may eventually pave the way for AI warfare.

DARPA’s artificial intelligence (AI) research program is called “Explainable Artificial Intelligence (XAI)”. DARPA’s article on XAI states, “The Explainable AI (XAI) program aims to create a suite of machine learning techniques that:

  • Produce more explainable models, while maintaining a high level of learning performance (prediction accuracy); and
  • Enable human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners.”
XAI Concept
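A minimal sketch of what “explainable” output means in practice: a model that returns, alongside its decision score, the contribution each input made to it. A linear scorer is the simplest case where those contributions are directly readable; the feature names and weights below are hypothetical, and DARPA’s program targets far more complex models than this.

```python
import numpy as np

# Minimal sketch of the XAI goal: a model that reports not just a
# decision but which inputs drove it. With a linear scorer, each
# feature's contribution is simply weight * value. Feature names and
# weights are hypothetical drone-control inputs, chosen for illustration.

feature_names = ["altitude", "speed", "heading_error", "fuel"]
weights = np.array([0.2, 0.5, -1.0, 0.3])

def explainable_score(x):
    """Return the decision score plus a per-feature explanation,
    sorted so the most influential input comes first."""
    contributions = weights * x
    explanation = sorted(
        zip(feature_names, contributions), key=lambda p: -abs(p[1])
    )
    return contributions.sum(), explanation

score, explanation = explainable_score(np.array([1.0, 2.0, 0.5, 1.0]))
print(f"score = {score:.2f}")
for name, c in explanation:
    print(f"  {name}: {c:+.2f}")
```

The point of the program is that a human operator can read such an explanation, decide whether to trust the recommendation, and intervene when the reasoning looks wrong.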

Now take XAI and apply it to controlling drones, and it won’t be long before XAI or its successors need only someone sitting in the continental USA, say in Denver, to control an army of drones smart enough to do battle anywhere. Or perhaps they won’t need any human involvement at all.

Recently, DARPA asked gamers to play along with DARPA’s most recent “game”.

The OFFensive Swarm Enabled Tactics Program, or OFFSET, seeks to help foot soldiers wield the rapidly advancing power of drone swarms, and that starts with creating a controller — “An advanced human-swarm interface to enable users to monitor and direct potentially hundreds of unmanned platforms simultaneously in real time,” as the Defense Advanced Research Projects Agency, or DARPA, put it on Wednesday.


“If we’re successful, this work could also bring entirely new scalable, dynamic capabilities to the battlefield, such as distributed perception, robust and resilient communications, dispersed computing and analytics, and adaptive collective behaviors,” program manager Timothy Chung said in the DARPA statement.

OFFSET aims to demonstrate its technologies through frequent live experiments with various unmanned air and ground platforms. Every six months, operational vignettes of progressively increasing complexity would challenge both the swarm architecture and the developed swarm tactics across numerous technological and operational test variables, such as swarm size, proportion of air and ground platforms, and mission duration. Users would employ the swarm interface to test the best of the virtual tactics in the real world, and interactively supply their unmanned platforms with near-real-time tactics updates using automated deployment technologies.

With the proper integration of XAI and robotics, the human aspect of war will change: fewer military personnel needed, less loss of life, less direct human involvement in the killing. The end result will be a dehumanized war. Just a game.
