UK Online Safety Act

From 16 March 2025, all websites serving UK users are required to comply with the UK Online Safety Act, which is enforced by Ofcom.

I3Italy.org is a small website that falls outside the scope of most, though not all, of the measures mandated by the act. The act is indeed very broad, and places an undue compliance burden on many small service providers.

We interact with users through private channels, moderated comments on the website, and social media (safeguarded by Meta’s policies). There is no user-to-user interaction on our website. Any such interaction on social media platforms takes place under Meta’s content moderation systems, which Ofcom deems to be within the scope of the OSA. Furthermore, we proactively monitor pages and comments, and staff can delete or hide content where necessary, even if Meta has approved it.

Nevertheless, we place strong emphasis on user safety from scams and harm in general; preventing such harm is one of our core founding principles.

We are required to publish a few statements, available below; for further transparency, we also publish our full risk assessment, which is subject to annual review.

A copy of the risk assessment, prepared using the Ofcom template and dated 14th March 2025, can be downloaded and read below.

Terms and conditions

1. Protection from Illegal Content
1.1. www.i3italy.org (“the Platform”) is committed to ensuring that individuals using its services are protected from illegal content in accordance with applicable laws and regulations.
1.2. The Platform implements measures to identify, prevent, and remove illegal content, including but not limited to:

  • Prohibition of content that constitutes or facilitates criminal activity, incites violence, or infringes upon intellectual property rights.
  • A content moderation policy that includes both automated and manual review processes.
  • Swift removal of content, in line with legal obligations, upon substantiated reports of illegality, including content that was approved by mistake.
  • Cooperation with law enforcement agencies and regulatory bodies where required by law.

1.3. Protection measures specific to priority illegal content categories:

  • Terrorism Content: The Platform moderates all user-submitted content before it can appear on the website, and relies on Meta’s tools for content posted on social media channels. We remain proactive: any verified terrorist content that was approved by mistake or posted on Meta channels is removed within the shortest possible timeframe following detection or notification.
  • Child Sexual Exploitation and Abuse (CSEA) Content: The Platform moderates all user-submitted content before it can appear on the website, and relies on Meta’s tools for content posted on social media channels. We remain proactive: any verified CSEA content that was approved by mistake or posted on Meta channels is removed within the shortest possible timeframe following detection or notification.
  • Other Priority Illegal Content: This includes, but is not limited to, hate speech, fraud, and threats of violence. The Platform moderates all user-submitted content before it can appear on the website, and relies on Meta’s tools for content posted on social media channels. We combine content moderation with human oversight to ensure timely removal while balancing legitimate free-expression considerations.

1.4. The Platform prioritizes minimization of exposure time for any identified priority illegal content by implementing rapid-response protocols and ensuring continuous monitoring of flagged material.

1.5. Where the Platform is alerted by a person to the presence of illegal content, or becomes aware of such content through proactive detection methods, it takes immediate action to assess, restrict, or remove the content as appropriate and, where necessary, notify relevant authorities.

2. Use of Proactive Technology
2.1. To ensure compliance with the Platform’s duties regarding illegal content, the following proactive technologies are employed:

  • Automated Content Detection: The Platform uses external services provided by Jetpack, Wordfence and Akismet to scan for and identify spam and potentially illegal content at the point of upload and through real-time monitoring.
  • Human Review Escalation: All content is subject to manual review by a dedicated compliance team before publication, to verify the determinations made by automated systems and apply appropriate enforcement actions.
  • Continuous Improvement: The Platform regularly updates its detection mechanisms to enhance effectiveness in response to evolving threats and regulatory requirements.

3. Complaints Handling and Resolution Processes
3.1. The Platform maintains a structured complaints process to address concerns regarding illegal content. Users may submit complaints through the following means:

  • Email Reporting: Direct complaints can be submitted via info@i3italy.org.
  • Law Enforcement and Regulatory Reports: The Platform acknowledges and prioritizes legally mandated removal requests from authorities.

3.2. Upon receipt of a complaint:

  • The Platform will acknowledge receipt within 7 business days.
  • An initial assessment will be conducted to determine the nature and validity of the complaint.
  • If the complaint concerns potentially illegal content, it will be reviewed by a compliance officer.
  • Where necessary, the content will be removed, restricted, or reported to relevant authorities.
  • The complainant will receive a resolution response within 30 business days, outlining actions taken where permissible under applicable privacy and regulatory laws.

3.3. The Platform maintains records of all complaints and associated actions for regulatory compliance and audit purposes.

4. Amendments and Updates
4.1. This statement is subject to periodic review and may be updated to reflect changes in legal obligations, regulatory requirements, and the evolution of proactive technologies.