The Importance of Cybersecurity in Today’s Digital World

In today’s digital world, cybersecurity is more important than ever. With the increase in online shopping, banking, and communication, individuals and organizations are at risk of cyber attacks. This blog post will discuss the importance of cybersecurity and provide tips on how to prevent future attacks.

The Risks of Cyber Attacks

Cyber attacks can have…

“Ofcom to Detail Action Required from Social Media Companies Over Illegal Content – December Deadline Looming for Compliance”

LONDON – Britain’s media regulator Ofcom said on Oct 17 that in December it would detail what action it expected social media companies to take over illegal content on their platforms, warning that it expected swift action and that firms failing to act would face consequences.

Ofcom, which is responsible for implementing the government’s Online Safety Act, said the platforms would have three months to complete their own illegal harms risk assessments after the publication of its demands.

“The time for talk is over,” Ofcom’s Chief Executive Melanie Dawes said on Oct 17. “From December, tech firms will be legally required to start taking action, meaning 2025 will be a pivotal year in creating a safer life online.”

She said the regulator had already seen positive changes, but expectations were going to be high.

“We’ll be coming down hard on those who fall short,” she said.

Ofcom said better protections had already been introduced by Meta, the owner of Instagram and Facebook, and by Snapchat, both of which have made changes to help prevent children from being contacted by strangers.

Britain’s new online safety regime, which became law last year, requires social media companies to tackle the causes of harm, particularly for children, by making their services safer.

If companies do not comply with the new law, they could face significant fines and, in the most serious cases, their services could be blocked in Britain. REUTERS

“Singapore Issues New Guidelines to Protect Businesses from AI Security Risks”

SINGAPORE – Rogue chatbots that spew lies or racial slurs may be just the beginning, as maliciously coded free chatbot models blindly used by businesses could unintentionally expose sensitive data or result in a security breach.

In new guidelines published on Oct 15, Singapore’s Cyber Security Agency (CSA) pointed out these dangers amid the artificial intelligence (AI) gold rush, and urged businesses to rigorously and regularly test whatever AI systems they plan to install.

This is especially crucial for firms that deploy chatbots used by the public, or those linked to confidential customer data.

Frequent system tests can help weed out threats like prompt injection attacks, where text is crafted to manipulate a chatbot into revealing sensitive information from linked systems, according to the newly published Guidelines on Securing AI Systems.
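To make the threat concrete, the sketch below shows a deliberately naive pre-filter that flags common injection phrasings before user text reaches a chatbot. The pattern list and function name are hypothetical illustrations, not taken from the CSA guidelines; real defences require layered controls (input isolation, output checks, least-privilege access to linked data), not keyword matching alone.

```python
import re

# Hypothetical examples of phrasings seen in prompt injection attempts.
# A keyword list like this is easy to bypass and is shown only to
# illustrate the kind of input a system test might probe for.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"reveal (your )?(system prompt|hidden instructions)",
    r"disregard (the )?(rules|guidelines)",
]

def looks_like_injection(user_input: str) -> bool:
    """Best-effort flag for inputs matching common injection phrasings."""
    text = user_input.lower()
    return any(re.search(p, text) for p in SUSPICIOUS_PATTERNS)
```

A public-facing chatbot could refuse or escalate flagged inputs before they reach the model or any linked customer-data store; a test suite could feed such strings in regularly, as the guidelines recommend, to confirm the deployed system does not leak data.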

The guidelines aim to help businesses identify and mitigate the risks of AI systems so that they can be deployed securely. The more closely AI systems are tied to business operations, the more strongly they should be secured.

Announcing the guidelines at the annual Singapore International Cyber Week (SICW) at the Sands Expo and Convention Centre on Oct 15, Senior Minister and Coordinating Minister for National Security Teo Chee Hean said the manual gives organisations an opportunity to prepare for AI-related cyber-security risks while the technology continues to develop.

Mr Teo said in his opening address that managing the risks that come with emerging technology like AI is an important step to build trust in the digital domain. He urged the audience to learn lessons from the rapid rise of the internet.

“When the internet first emerged, there was a belief that the ready access to information would lead to a flowering of ideas and the flourishing of debate. But the internet is no longer seen as an unmitigated good,” he said, adding that there is widespread recognition that it has become a source of disinformation, division and danger.

“Countries now recognise the need to go beyond protecting digital systems to also protecting their own societies,” he said. “We should not repeat these mistakes with new technologies that are now emerging.”

The ninth edition of the conference is being held between Oct 14 and 17 and features keynotes and discussion panels by policymakers, tech professionals and experts.

AI owners are expected to oversee the security of AI systems across their life cycle, from development and deployment to disposal, according to CSA’s guidelines, which do not address the misuse of AI in cyber attacks or disinformation.

In a statement released on Oct 15, CSA said: “While AI offers significant benefits for the economy and society… AI systems can be vulnerable to adversarial attacks, where malicious actors intentionally manipulate or deceive the AI system.”

Organisations using AI should consider more frequent risk assessments than for conventional systems, to ensure tighter auditing of machine learning systems.