Facebook employee’s warning
An employee who left Facebook in late 2020 warned his former colleagues that the company’s reliance on AI won’t save it from its most pressing problems. In a farewell note, known internally at Facebook as a “badge post,” the employee said the company wants to oversee all human discourse using “perfect, fair, omniscient robots owned by Mark Zuckerberg.”
The employee went on to say that this reality is “clearly a dystopia, but it is also so deeply ingrained we hardly notice it anymore,” and that the more Facebook tries to solve human problems with engineering solutions, the nearer that future grows. “We need more radical humanists to interrogate these assumptions, explain why life can’t be sanitized, and disrupt further attempts to centralize power,” the employee wrote.
Photos of the posts were included in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by whistleblower Frances Haugen’s legal counsel. A consortium of news organizations, including The Atlantic and The New York Times, obtained the redacted versions received by Congress, now known as the “Facebook Papers.” Insider has also since obtained copies.
Facebook representatives have said the documents do not paint the entire picture of the company’s business development investments, internal research, and efforts to mitigate harm.
Facebook uses a mix of human reviewers and automated systems to sift through content on its social platform that could be harmful. It also uses automated systems to help decide what kinds of content to show users. One downside of that approach, the Facebook Papers have shown, has been computer-driven systems promoting angry, divisive, sensationalistic posts that contain misinformation.
Facebook’s reliance on AI was a central focus of Haugen’s testimony in front of a Congressional committee earlier this month.
“I strongly encourage reforms that push us towards human-scale social media and not computer-driven social media,” she told lawmakers. “Those amplification harms are caused by computers choosing what’s important to us, not friends and family.”
Facebook AI research challenges
Facebook today announced Ego4D, a long-term project aimed at solving AI research challenges in “egocentric perception,” or first-person views. The goal is to teach AI systems to comprehend and interact with the world like humans do as opposed to in the third-person, omniscient way that most AI currently does.
It’s Facebook’s assertion that AI that understands the world from first-person could enable previously impossible augmented and virtual reality (AR/VR) experiences. But computer vision models, which would form the basis of this AI, have historically learned from millions of photos and videos captured in third-person.
Next-generation AI systems might need to learn from a different kind of data — videos that show the world from the center of the action — to achieve truly egocentric perception, Facebook says. To that end, Ego4D brings together a consortium of universities and labs across nine countries, which collected more than 2,200 hours of first-person video featuring over 700 participants in 73 cities going about their daily lives.
Facebook funded the project through academic grants to each of the participating universities. As a supplement to the work, researchers from Facebook Reality Labs (Facebook’s AR- and VR-focused research division) used Vuzix Blade smartglasses to collect an additional 400 hours of first-person video data in staged environments in research labs.