A few hours after Twitter announced it would accept Elon Musk’s offer to buy the company, the SpaceX CEO clarified his plans for the social network. In a press release, Musk outlined the radical changes he intended to make, including opening up the algorithms that determine what users see in their feeds.
Musk’s ambition to open-source Twitter’s algorithms is driven by his longstanding concern about potential political censorship on the platform, but this move is unlikely to achieve the effect he desires. Instead, it could lead to a host of unexpected problems, experts warn.
Setting aside Musk’s fraught relationship with authority, his desire for algorithmic transparency coincides with the wishes of politicians around the world. The idea has been a cornerstone of multiple governments’ attempts to rein in Big Tech in recent years. Melanie Dawes, chief executive of Ofcom, the UK’s communications regulator, has said that social media platforms need to explain how their code works. The Digital Services Act, approved by the European Union on 23 April, will also oblige platforms to offer greater transparency.
In the United States, Democratic senators put forward proposals for the Algorithmic Accountability Act in February 2022, with the goal of bringing new transparency and oversight to the algorithms that govern our timelines and news feeds. In theory, making Twitter’s algorithm visible and adaptable to others means that someone could simply copy the source code and release a rival version under a different name. Much of the Internet already runs on open-source software; one famous example is OpenSSL, a cryptographic toolkit used across much of the Web, which suffered a major security flaw in 2014.
There are already examples of open-source social networks. Mastodon, a microblogging platform created amid concerns about Twitter’s dominance, allows anyone to inspect its code, which is posted on the GitHub software repository. But seeing the code behind an algorithm doesn’t necessarily reveal how it works, and it certainly doesn’t tell the average person much about the structures and business processes that go into its creation.
Furthermore, there is no single algorithm that controls Twitter. “Some of them will determine what people see in their timelines in terms of trends, content, or suggested follows,” says Catherine Flick, a computing researcher at De Montfort University in the UK. The algorithms people will be most interested in are those that control what content appears in users’ timelines, but even those won’t be very useful without the training data behind them.
“It seems clear that what really matters is how the algorithms were developed,” says Jennifer Cobbe, a researcher at Cambridge University. The concern is that artificial-intelligence algorithms may perpetuate the human biases contained in the data used to train them. Who develops an algorithm, and what data they use, can make a significant difference to the results.
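That dependence on training data is easy to illustrate. The toy sketch below is not Twitter’s actual system; every name and dataset in it is invented. It shows identical ranking code producing opposite orderings depending solely on the engagement history it was “trained” on:

```python
from collections import Counter

def train_weights(engagement_log):
    """Derive per-word weights from posts users engaged with.
    A toy stand-in for real model training."""
    counts = Counter(word for post in engagement_log for word in post.split())
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def rank(posts, weights):
    """Order posts by the summed weight of their words, highest first."""
    return sorted(posts,
                  key=lambda p: sum(weights.get(w, 0.0) for w in p.split()),
                  reverse=True)

# The same code, two different (hypothetical) engagement histories.
weights_a = train_weights(["cats cats memes", "cats news"])
weights_b = train_weights(["politics politics news", "politics memes"])

posts = ["cats memes", "politics news"]
print(rank(posts, weights_a))  # the cats-heavy history surfaces "cats memes" first
print(rank(posts, weights_b))  # the politics-heavy history surfaces "politics news" first
```

Reading `rank` alone reveals nothing about which post a given user will see first; that is decided entirely by the weights, which come from data no code release would include.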
For Cobbe, the risks outweigh the potential benefits. The computer code tells us nothing about how the algorithms were trained or tested, what factors or considerations went into them, or what was prioritised in the process, so open-sourcing may not make a significant difference to transparency on Twitter. Meanwhile, it could introduce some significant security risks.
Companies often publish impact assessments that probe and test their data-protection systems to highlight weaknesses and flaws. When flaws are discovered, they are fixed, but the details are often kept obscure to limit security risks. Open-sourcing Twitter’s algorithms would make the site’s entire code base accessible to all, potentially allowing attackers to scan the software for vulnerabilities to exploit.
Open-sourcing Twitter’s algorithms could create another problem: handing bad actors more knowledge of the system, which would complicate another of Musk’s stated goals, “defeating all spambots”. “The risk is not so much in unveiling how the algorithm’s code works as in the ability to discern how Twitter ranks posts in users’ timelines,” says Eerke Boiten, professor of cybersecurity at De Montfort University.
There are other, more troubling unintended consequences. One of the main concerns is the inevitable disputes that will arise when people try to analyse the algorithm as amateurs, which could lead to even more toxic and fruitless debates. “Open-sourcing the algorithm will not solve any problems with bias,” says Flick, and taking action to correct bias will inevitably be viewed through a political rather than a technological lens, as a recent paper by Twitter researchers demonstrates, showing that the platform’s algorithms amplify right-leaning content more readily than left-leaning content.