All in the algorithm? Social media use drops when other feeds provided – dpa international

Social media platforms have long been criticized for using opaque algorithms that dictate what users see on their feeds. “The notion that such algorithms create political ‘filter bubbles,’ foster polarization, exacerbate existing social inequalities, and enable the spread of disinformation has become rooted in the public consciousness,” according to the researchers, who were led by Andrew M. Guess of Princeton University. Keeping user and algorithm apart, however, “did not change” people’s political attitudes, knowledge and offline behaviours, they found, suggesting that when account holders encountered views at odds with their own, they were inclined to just stop scrolling and do something else.

Scientists lift shutter on self-charging cameras – dpa international

A “fundamental” advance in image technology could lead to lenses that don’t need filters and cameras that recharge themselves. That’s according to the US National Science Foundation (NSF), after Penn State University researchers announced recently they had come up with a device that produces photos by mimicking the human eye. The device emulates the eye’s red, green and blue photoreceptors as well as the neural network, the NSF said, praising the university’s “breakthrough” in “realizing perovskite narrowband photodetection devices – from materials synthesis to device design to systems innovation.” The NSF believes the development “may represent a way around using filters found in modern cameras that lower resolution and increase cost and manufacturing complexity.”

Facebook-linked Twitter competitor facing delayed EU launch – dpa international

Meta’s would-be rival to Twitter, an app called Threads, will not be available for download in the European Union when it launches elsewhere this week, according to Ireland’s Data Protection Commission (DPC). Spokesman Graham Doyle said “we’ve been in contact with Meta who have confirmed that they don’t plan to roll out the app in the EU at present.”

Passive social media use contributes to worry and stress – dpa international

There’s an old saying about feeling more alone in a crowd than in a room by oneself. So, despite there being more than 4.75 billion social media users around the world, it probably should not be a surprise that many of them feel lonely. Most vulnerable to such feelings, which can extend to stress, anxiety and even depression in some cases, are those who scroll passively through feeds – an online version of living vicariously – without posting themselves. Bournemouth University in Britain found that young adults “who use social media to browse content of other users are more likely to experience anxiety, depression and stress than more active users who share their own content.” The university surveyed almost 300 social media users and found not only that thumbing through third-party material gave users the blues, but also that stress was reduced when users simply posted their own material without interacting with others.

Survey shows Anglophones unable to tell AI and human tweets apart – dpa international

“This”. “I can’t even”. “Ratio’d”. “Beast mode”. “Cheat code”. “Same”. Questions written minus the question mark as if they’re statements. Social media such as Twitter are riddled with such quick-to-age neologisms, ironically often used in an attempt at a quip or profundity. Their use brings to mind a segment in the Monty Python film “Life of Brian” where the eponymous central character addresses a crowd of devotees gathered under his bedroom window. “You’re all individuals,” Brian tells the throng, to which they reply, in unison, “Yes, we are all individuals.” “You’re all different,” he continues. “Yes, we are all different,” they chime. With life unwittingly imitating art, perhaps it is little wonder that AI bots can generate tweets that, to some eyes, read no differently from tweets posted by humans.

People too confident they can spot deepfake videos, according to survey – dpa international

Joe Biden in drag, drinking Bud Light; Donald Trump as a shady lawyer in ‘Breaking Bad’. The two likeliest contestants in the 2024 US presidential election have both been subjects of recent ‘deepfake’ hoax videos. Boosted by ‘generative’ artificial intelligence (AI) tools such as ChatGPT, deepfakes “have reached a level of sophistication that prevents detection by the naked eye,” according to the publishers of a recent multi-nation survey, which found people to be too sure of their own ability to spot a fake. The 2023 Online Identity Study, conducted by Censuswide for Jumio, a California-based online safety business, canvassed more than 8,000 people across Britain, Mexico, Singapore and the US about deepfakes. Around two-thirds of those asked claimed awareness of the technology, though that ranged from just 56% in Britain to almost 90% in Singapore, with over half saying they were confident they could tell the difference between an authentic clip and a mock-up.

Around half of all newsrooms using AI, going by industry report – dpa international

Artificial intelligence platforms such as OpenAI’s ChatGPT are taking root in the media, going by a survey by the World Association of News Publishers (WAN-IFRA), in which 49% of respondents said their newsrooms were deploying the bots. “The primary use case is the tools’ capability to digest and condense information, for example for summaries and bullet points,” according to the survey report, which sought to allay fears that lazy hacks are already using AI to churn out news. The report was published in late May ahead of warnings – including from the heads of OpenAI and Google DeepMind – that AI could make humanity extinct. Less ominous sound bites – that AI could cause the extinction of many jobs – have been doing the rounds for years. But whatever about humanity being extinguished, the survey suggests journalists do not see so-called generative AI as fatal to their careers, despite the churn and turmoil wrought across the media industry since the spread of the internet in the 1990s.

Legalese and other jargon leaving people in the dark, lawyers included – dpa international

There is hope for the rest of us, as the saying goes, if even lawyers do not understand their own idioms. After asking the obvious question – “why do lawyers write in such a convoluted manner?” – a Massachusetts Institute of Technology (MIT) team found that lawyers, despite apparently being responsible for their own arcane jargon, often don’t really like it. Less forgivably, sometimes they aren’t even too sure what it all means. “Across two pre-registered experiments, we find that lawyers, like laypeople, were less able to understand and recall ‘legalese’ contracts than content of equivalent meaning drafted in a simplified register,” they said. In plain English, that means lawyers can better remember and comprehend their own contracts if they use plain English.

Cabin fever: Covid lockdowns left people foggy about big events – dpa international

Stay at home, “meet” on Zoom, no travel beyond 5km, no pub or cinema or football or school, hospital treatments cancelled or put back, holidays out of the question, flatten the curve. For many people, after a couple of weeks of the Covid lockdown routine, one day ran into the next as a fugue of repetitiveness and lethargy clouded minds and memories. Life went on though, after a fashion, and with it some attenuated versions of what otherwise would be remembered as big news: the last-ditch hammering out of a Brexit deal between Britain and the European Union, the days-long blocking of the Suez Canal by a stricken container ship, the mushroom-cloud explosion of stockpiled fertiliser at Beirut’s port, to list a few headline-grabbers. But it turns out people have only a vague recollection of when such events happened.

AI social media moderators harsher than humans – dpa international

The alarm around so-called artificial intelligence (AI) and machine learning has prompted the technology’s “godfather” Geoffrey Hinton to lament his life’s work. AI, the warnings go, could bring widespread job losses or worse, should it “get smarter than people”, as Hinton put it when he quit his job at Google in early May. Hinton’s concerns followed tech business bosses putting their names to a letter calling for a six-month pause on AI advances, lest we “develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us” and in turn “risk loss of control of our civilization.” A less hair-raising warning came on May 10, with the publication in Science Advances of research showing AI to be a harsher judge of social media posts than its human counterparts.