The Australian Transaction Reports and Analysis Centre (Austrac), alongside the Department of Home Affairs, has recommended changes to the way payment features on digital platforms are used, in the hope of preventing their misuse by terrorists and violent extremists.

In a submission [PDF] to the Senate Select Committee on Australia as a Technology and Financial Centre, made as part of the committee's inquiry into the sector, Austrac said social media and other platforms, both mainstream and alternative, offer payment options that continue to be exploited by violent extremist actors as a means of financing.

“For example, violent extremists are able to exploit YouTube’s ‘Super Chat’ feature, which gives users the ability to receive funds under the pretext of supporters paying to ask questions,” it wrote.

“The use of this system by violent extremist actors has been observed in the UK and Australia.”

See also: Law enforcement leaning on Austrac as legislation ‘lags’ behind technology

Austrac said Patreon and other crowdfunding platforms have been used by violent extremist actors to raise revenue in the United States and Australia, and that e-commerce platforms such as Amazon and Redbubble have also been exploited by violent extremists seeking to sell manifestos, t-shirts, and other merchandise.

“The majority of platforms already prohibit categories such as ‘hate speech’ and ‘incitement to violence’ in their terms of service. Therefore, requiring platforms to prohibit this conduct is unlikely to produce an outcome more beneficial than the status quo,” it said.

An alternative, the government entity suggested, could be to introduce safeguards to the financial models of these platforms to prevent exploitation by terrorists and violent extremists.

“YouTube introduced one such safeguard by demonetising videos on ‘sensitive topics’ (removing the ability for those posting these kinds of videos to earn money from the videos and live streams),” it said.

“Following the 2019 Christchurch terrorist attack, several platforms introduced live-stream safeguards that indirectly limit the use of live-streaming tools for funding.

“For consistent and effective safeguarding measures, any safeguard must be adopted as industry best practice.”

RELATED COVERAGE

Facebook, Google, Twitter caution Australia against a blanket terrorism content ban

The three technology giants have asked the government to consider other measures before calling for a ban on all content deemed violent extremist or terrorist.

Australia’s eSafety and the uphill battle of regulating the ever-changing online realm

The eSafety Commissioner has defended the Online Safety Act, saying it's about protecting the vulnerable and holding social media platforms accountable for offering a safe product, much the same way car manufacturers and food producers are in the offline world.

Tech giants and cops at least agree thwarting terrorist or extremist activity is a joint effort

Social media platforms say they want to work with law enforcement and policymakers to stop their platforms from being used to promote extremist movements and radicalism in Australia.


