The World Health Organisation (WHO) and some 50 countries issued a warning on Nov 8 at the United Nations about the rise of ransomware attacks against hospitals, with the United States specifically blaming Russia.
Ransomware is a type of digital blackmail in which hackers encrypt the data of victims – individuals, companies or institutions – and demand money as a “ransom” in order to restore it.
Such attacks on hospitals “can be issues of life and death,” according to WHO head Tedros Adhanom Ghebreyesus, who addressed the UN Security Council during a meeting on Nov 8 called by the United States.
“Surveys have shown that attacks on the healthcare sector have increased in both scale and frequency,” Dr Ghebreyesus said, emphasising the importance of international cooperation to combat them.
“Cybercrime, including ransomware, poses a serious threat to international security,” he added, calling on the Security Council to consider it as such.
A joint statement co-signed by over 50 countries – including South Korea, Ukraine, Japan, Argentina, France, Germany and the United Kingdom – offered a similar warning.
“These attacks pose direct threats to public safety and endanger human lives by delaying critical healthcare services, cause significant economic harm, and can pose a threat to international peace and security,” read the statement, shared by US Deputy National Security Advisor Anne Neuberger.
The statement also condemned nations that “knowingly” allow those responsible for ransomware attacks to operate from their territory.
At the meeting, Ms Neuberger directly called out Moscow, saying: “Some states – most notably Russia – continue to allow ransomware actors to operate from their territory with impunity.”
France and South Korea also pointed the finger at North Korea.
Russia defended itself by claiming the Security Council was not the appropriate forum to address cybercrime.
“We believe that today’s meeting can hardly be deemed a reasonable use of the Council’s time and resources,” said Russian ambassador Vassili Nebenzia.
“If our Western colleagues wish to discuss the security of healthcare facilities,” he continued, “they should agree in the Security Council upon specific steps to stop the horrific… attacks by Israel on hospitals in the Gaza Strip.” AFP
Ofcom to detail action required from social media companies over illegal content, with December deadline looming for compliance
LONDON – Britain’s media regulator Ofcom said on Oct 17 that it would set out in December what action it expected social media companies to take over illegal content on their platforms, and that it expected swift action or the firms would face consequences.
Ofcom, which is responsible for implementing the government’s Online Safety Act, said the platforms would have three months to complete their own illegal harms risk assessments after the publication of its requirements.
“The time for talk is over,” Ofcom’s Chief Executive Melanie Dawes said on Oct 17. “From December, tech firms will be legally required to start taking action, meaning 2025 will be a pivotal year in creating a safer life online.”
She said the regulator had already seen positive changes, but expectations were going to be high.
“We’ll be coming down hard on those who fall short,” she said.
Ofcom said better protections had already been introduced by Meta, the owner of Instagram and Facebook, and by Snapchat, which have brought in changes to help prevent children from being contacted by strangers.
Britain’s new online safety regime, which became law last year, requires social media companies to tackle the causes of harm, particularly for children, by making their services safer.
If companies do not comply with the new law, they could face significant fines and, in the most serious cases, their services could be blocked in Britain. REUTERS
WASHINGTON – A sophisticated breach of US telecommunications systems has extended to the presidential campaigns, raising questions about the group behind the attack and the extent of its efforts at collecting intelligence.
It was unclear what data was taken in the attack. The far-reaching operation has been linked to the Chinese government and attributed to a group experts call Salt Typhoon.
Investigators believe hackers took aim at a host of well-connected Americans, including the presidential candidates – reflecting the scope and potential severity of the hack.
Here’s what to know.
What is Salt Typhoon?
Salt Typhoon is the name Microsoft cybersecurity experts have given to a Chinese group suspected of using sophisticated techniques to hack into major systems – most recently, US telecommunication companies.
The moniker is based on Microsoft’s practice of naming hacking groups after types of weather – “typhoon” for hackers based in China, “sandstorm” for efforts by Iran and “blizzard” for operations mounted by Russia. A second term, in this case “salt,” is used to denote the type of hacking.
Experts say Salt Typhoon seems to be focused primarily on counterintelligence targets, unlike other hacking groups that may try to steal corporate data, money or other secrets.
What do US officials think Salt Typhoon has done?
National security officials have gathered evidence indicating the hackers were able to infiltrate major telecom companies, including but not limited to Verizon.
The New York Times reported on Oct 25 that among the phones targeted were devices used by former President Donald Trump and his running mate, Senator JD Vance of Ohio. The effort is believed to be part of a wide-ranging intelligence-collection effort that also took aim at Democrats, including staff members of both Vice President Kamala Harris’ campaign and Senator Chuck Schumer of New York, the majority leader.
How serious is this hacking?
National security officials are still scrambling to understand the severity of the breach, but they are deeply concerned that, as appears to be the case, hackers linked to Chinese intelligence were able to access US cellphone and data networks. Such information can provide a wealth of useful intelligence to a foreign adversary like China.
To some degree, the breach represents a continuation of data collection on the types of targets that spies have been gathering for decades. In this instance, however, the sheer quantity and quality of the information Salt Typhoon may have gained access to could put the intrusion into its own category, and suggests that US data networks are more vulnerable than officials realised.
What did the hackers get?
At this stage, that is still unclear. One major concern among government officials is whether the group was able to observe any court-ordered investigative work, such as Foreign Intelligence Surveillance Act collection – a highly secretive part of American efforts to root out spies and terrorists.
No one has suggested yet that the hackers were able to essentially operate inside individual targets’ phones. The more immediate concern would be if they were able to see who was in contact with candidates and elected officials, and how often they spoke and for how long. That kind of information could help any intelligence agency understand who is close to senior decision-makers in the government.
People familiar with the investigation say it is not yet known if the hackers were able to gain access to that kind of information; investigators are reasonably confident that the perpetrators were focused on specific phone numbers associated with presidential campaigns, senior government leaders, their staff members and others.
Like the weather, hacking is never really over, and the Salt Typhoon breach may not be over either. It is also possible that the United States may never learn precisely what the hackers got. NYTIMES
SEATTLE – The aftermath of a ransomware attack on a software supplier has been affecting Starbucks’ ability to pay baristas and manage their schedules, a company spokesperson said on Nov 25.
The coffee giant said that an outage at a third-party vendor has disrupted a back-end Starbucks process that enables employee scheduling and time tracking.
The outage is not affecting its customer service, and the company is working to ensure its employees are fully paid for their hours worked, with limited disruption or discrepancy, the spokesperson said.
Blue Yonder, which provides supply chain software to Starbucks and other retailers, according to the Wall Street Journal, said on Thursday that it had experienced disruptions due to a ransomware attack and was working to fix the issue. REUTERS
SINGAPORE – A group of Singapore Sports School students were caught and punished in November for creating and circulating deepfake nude images of their female schoolmates.
Their actions have ignited discussions about how the young – especially young girls – can best protect themselves from such online harms, and how they can respond if they are victimised by deepfakes.
This is, of course, a global issue.
In South Korea, for instance, a Telegram channel with more than 220,000 members was reportedly used to create and share AI-generated pornographic images.
In its 2023 Survey on Online Harms in Singapore, non-profit group SG Her Empowerment (SHE) reported that 9 per cent of the 1,056 Singapore residents aged over 15 who were surveyed had experienced image-based sexual abuse, including through altered images or videos.
Yet, SHE’s Safeguarding Online Spaces survey, also conducted in 2023, found that four in 10 young people reported low awareness of self-help tools for online harms, while five in 10 reported low awareness of legal recourse options.
If you are unsure where to go and what to do if you have been targeted by deepfakes, here are some answers by experts to pressing questions you might have.
Q: What’s the first thing to do if I become the target of deepfake nudes?
A: The most important first step is to document evidence, said experts interviewed.
Taking screenshots of posts or videos, recording links or URLs, and saving messages and timestamps all go a long way when reporting the incident to authorities or social media platforms.
Singapore University of Technology and Design Professor Roy Lee, who specialises in artificial intelligence, emphasised that while the knee-jerk reaction may be to report the image or video as soon as possible to have it removed, recording as much evidence as possible serves crucial purposes.
He said: “Harmful content can be deleted, altered or moved by the perpetrator, making it difficult to prove that the incident occurred. Screenshots act as a timestamped record, ensuring that the evidence is not lost.
“Platforms and authorities (also) often require concrete evidence when investigating cases of online harm. Having screenshots can strengthen the case and increase the likelihood of action being taken against the offender.”
But even if you don’t take a screenshot, all is not lost.
Ms Lorraine Lim, centre head of the SheCares@SCWO Support Centre, said that “law enforcement will do their best to investigate using the information available” and “police may collaborate with platforms to retrieve relevant data if possible”.
Q: Where can I report deepfake nudes and other harmful content?

A: Experts say you should report harmful content to the social media platform that is hosting it. Many platforms have policies against such content, and each has its own mechanisms for reporting.
Ms Sugidha Nithiananthan, director of advocacy and research at the Association of Women for Action and Research (Aware), said: “Familiarising yourself with online platforms’ policies for reporting and removing harmful content beforehand can save precious time if you need to act quickly.”
For instance, Facebook and Instagram include a ‘Report’ link on nearly every post for users to report content that violates policy. WhatsApp only allows users to report other users and groups, but not individual messages. Conversely, Telegram users can only flag individual messages and images.
You should also make a police report if you have been targeted by deepfake nudes or have been the victim of online harms. A police spokesperson told The Straits Times that these harms may fall under a variety of offences including the Protection from Harassment Act (Poha) and sexual-and-voyeurism-related offences.
You are advised to visit the nearest police station, or to file a police report online if the matter does not require immediate police attention.
While in-person reporting at the police station allows officers to ask questions that provide helpful and relevant context, some victims may be too distressed to share their experience verbally, and typing an online report might be more comfortable for them.
Investigation officers will follow up on submitted reports to gather additional details when necessary.
Q: What are my next steps if I want to pursue legal action against the perpetrators?
A: There are laws within the Penal Code, Films Act and Poha that exist to protect victims of deepfake nudes and other forms of image-based sexual abuse.
Experts said that those who want to pursue immediate legal action should file a protection order under Poha – a court order that protects victims of harassment by prohibiting perpetrators from continuing harassment behaviour.
Ms Liane Yong, director of Guardian Law, explained that Poha protects victims by criminalising behaviour or communication that, whether intentionally or unintentionally, “causes harassment, alarm or distress”.
To apply for a protection order, one must be at least 21 years old; applications for victims below 21 must be made through an older representative.
Before filing, those targeted should complete a pre-filing assessment on the Community Justice and Tribunals System (CJTS) e-platform to determine the complexity of their case. This will determine the e-platform (CJTS for simplified cases or eLitigation for more complex cases) to which victims submit their applications.
Victims must then submit applications to the respective e-platforms. Applications generally include details about the harassment, relevant evidence and information about the types of remedies sought. Application fees range from $30 to upwards of $100 based on the platform and type of claim.
Q: Where can I get support if I have been targeted?

A: You can reach out to trusted adults – parents and teachers – for support. Many non-profit organisations also provide emotional, legal and technical support for victims of such online harm.
The SheCares@SCWO support centre is Singapore’s first support centre for online harms. It provides free legal advice through clinics with volunteer lawyers, offers free counselling, and even accompanies victims to the police station to file police reports if needed.
Similarly, the Aware Sexual Assault Care Centre provides support for victims, including a free legal clinic, assistance with gathering evidence, filing police reports or Magistrate’s complaints, and applying for Poha court orders.
Q: How do I avoid becoming a victim of deepfake nudes and other online harms?
A: “With advanced technology such as AI tools becoming widely available and easier to use, anyone with an online presence is vulnerable, so it’s important to exercise caution when navigating the online world,” said Ms Lim.
She advised limiting who can see posts through privacy settings and avoiding sharing highly personal information such as full names or addresses. She also warned young people to be wary of unfamiliar follower requests and suspicious behaviour on social media.
Ms Lim said: “Be aware of overly-friendly accounts, or accounts that are quick to offer gifts or offers that are too good to be true.”
“Love-bombing tactics – providing excessive attention, making grand gestures or offering exorbitant gifts, pushing for commitment or exhibiting controlling behaviour – are a sign that something is wrong.”
But while these steps may help reduce your chances of becoming a victim, it always remains a possibility.
Ms Nithiananthan said: “There is very little a person can do to entirely protect themselves from violence and harm, both online and offline.
“When we place too much emphasis on the victim protecting herself, we imply that it is her duty to avoid this abuse. It is this type of thinking that downplays the accountability of perpetrators and wrongly shifts focus to the victim’s actions.”
Experts agreed that over-focusing on what an individual can do to protect themselves may make victims believe that what they experienced was their fault, and stand in the way of them making official reports.
Prof Lee said one of the best ways to reduce deepfakes and online harms is the act of reporting harmful content itself.
“Reporting… contributes to preventing harm to the next potential victim.
“I encourage victims to take action – for themselves and for the community. Together, we can improve online safety if each of us stands up against malicious content.”
Australia’s ban on children under 16 using social media has sparked global conversations about online safety and youth development (Australia’s world-first social media ban for kids under 16 attracts mixed reaction, Nov 29).
While the intentions behind this policy – protecting children from cyber bullying, exploitation and harmful content – are commendable, it raises critical questions about balance, enforcement and unintended consequences.
As a 14-year-old who does not use social media, I can see both sides of the argument.
On the one hand, platforms like Instagram, TikTok and Discord can be overwhelming, exposing young users to unhealthy comparisons, misinformation and even predatory behaviour.
Many parents and educators worry about the long-term effects of excessive screen time, often spent on social media platforms, on mental health and academic performance.
On the other hand, outright bans overlook the positive aspects of social media. For many teens, these platforms are a lifeline for creative expression, activism and staying connected, especially in an increasingly digital world.
Moreover, enforcing such a law could be challenging, as children are often tech-savvy enough to find workarounds.
Rather than outright bans, a better solution might involve empowering young users through digital literacy education. Teaching children how to navigate online spaces safely, recognise misinformation and manage screen time could address the root problems without cutting children off from valuable opportunities.
Singapore can learn from Australia’s debate as we navigate our own challenges with digitalisation. Instead of waiting for government intervention, schools, families and tech companies should work together to create a safer online environment while respecting the voice and agency of young people.
The internet isn’t going anywhere, and neither are we. Let us try to work together to ensure we can use it wisely.
Avishi Gurnani, 14
Secondary 2