November 11, 2022 - 7:30am

Earlier this week, President Joe Biden was asked if the U.S. government should investigate Elon Musk’s acquisition of Twitter on national security grounds. The President responded that it was “worthy of being looked at”. Yet, strangely, there was no mention of TikTok, which poses a far greater threat to American national security.

For months now, Twitter has struggled to keep its most active users engaged, and its audience has plateaued at around 300 million monthly users (by comparison, TikTok has over one billion). Twitter has also been struggling financially because it has forgotten that the future is video: relentlessly addictive, auto-playing, effectively infinite video. In 2021, Android phone users spent over 16 trillion minutes on TikTok; in 2022, they spent an average of 95 minutes a day on the app. Facebook reports far greater engagement with video content than with any other media, and the head of Instagram, Adam Mosseri, has confirmed that the app is moving towards becoming a video-sharing platform. It’s no wonder that Twitter is under pressure to reinvent itself.

Yet while users worry about what Elon Musk will do with the 12 terabytes of data generated by Twitter every day, over 112 million U.S. TikTok users continue to hand over personal information to an app with known links to the Chinese government and ‘aggressive’ data-harvesting tactics. TikTok tracks and collects users’ locations, IP addresses, calendars, contact lists, browsing and search histories, and the videos they watch and how long they watch them, and it shares all of this with more third parties than any other app. TikTok then uses this data to build ‘inferred demographics’ that can perpetuate stereotypes and polarisation, such as by pushing violent videos at ethnic minorities.

We know the risks. We know that TikTok was used to monitor the locations of American citizens. We know there have been numerous inquiries into how TikTok processes children’s data. We know that, despite being stored on servers outside China, the data can still be accessed from within China. Yet we are caught in a ‘privacy paradox’: aware of the need to protect our data online, yet willing to haphazardly agree to any terms and conditions as long as we can consume our content. This apathy is insidiously selective. As a teacher, I see it in my students: they will all have private Instagram accounts, yet they do not think twice about how much they give away with every pause, flick and double-tap on a TikTok video.

The danger of TikTok comes down to this: users are shown content from accounts they do not follow, and so are constantly exposed to videos they did not ask for. Meta is following suit (by the end of 2023, it plans to more than double the proportion of material on Facebook and Instagram recommended by AI), but TikTok’s ubiquitous use of video is more concerning because video is technically much, much harder to moderate. AI has to sift through the thousands of static frames that make up a video to recognise potentially harmful content (a gun, for example) and determine its context (is it being shown in a violent or suggestive way?), while also monitoring the audio (gunshots may happen off-screen) and inferring other layers of meaning.

Whether the answer to these dangers is greater education or tighter regulation and legislation remains to be seen. The Twitter hype will soon die down; in the meantime, we have far greater things to worry about.


Kristina Murkett is a freelance writer and English teacher.
