Similar Posts
WASHINGTON – A U.S. Senate Judiciary subcommittee overseeing technology issues will hold a hearing Tuesday on Chinese hacking incidents, including a recent incident involving American telecom companies.
The hearing to be chaired by Senator Richard Blumenthal will review the threats “Chinese hacking and influence pose to our democracy, national security, and economy,” his office said, adding the senator plans “to raise concerns about Elon Musk’s potential conflicts of interest with China as Mr. Musk becomes increasingly involved in government affairs.”
Musk, the head of electric car company Tesla, social media platform X and rocket company SpaceX, emerged during the election campaign as a major supporter of U.S. President-elect Donald Trump. Trump appointed him as co-head of a newly created Department of Government Efficiency to “slash excess regulations, cut wasteful expenditures, and restructure Federal Agencies.”
Musk, who was in China in April and reportedly proposed testing Tesla’s advanced driver-assistance package in China by deploying it in robotaxis, did not immediately to requests for comment.
The hearing will include CrowdStrike Senior Vice President Adam Meyers and Telecommunications Industry Association CEO David Stehlin, Strategy Risks CEO Isaac Stone Fish and Sam Bresnick, research fellow at the Center for Security and Emerging Technology at Georgetown University,
Last week, U.S. authorities said China-linked hackers have intercepted surveillance data intended for American law enforcement agencies after breaking in to an unspecified number of telecom companies, U.S. authorities said on Wednesday.
The hackers compromised the networks of “multiple telecommunications companies” and stole U.S. customer call records and communications from “a limited number of individuals who are primarily involved in government or political activity,” according to a joint statement released by the FBI and the U.S. cyber watchdog agency CISA.
The announcement confirmed the broad outlines of previous media reports that Chinese hackers were believed to have opened a back door into the interception systems used by law enforcement to surveil Americans’ telecommunications.
It follows reports Chinese hackers targeted telephones belonging to then-presidential and vice presidential candidates Donald Trump and JD Vance, along with other senior political figures, raised widespread concern over the security of U.S. telecommunications infrastructure.
Beijing has repeatedly denied claims by the U.S. government and others that it has used hackers to break into foreign computer systems.
Last month, a bipartisan group of U.S. lawmakers asked AT&T, Verizon Communications and Lumen Technologies to answer questions about the reporting hacking of the networks of U.S. broadband providers. REUTERS
We spend so much of our lives online but have we thought about what will happen to our digital trails and assets when we die?
It is a question that came up for husband-and-wife content creators Muhammad Alif Ramli and Liyana Syahirah Ismail Johari.
They realise, for example, if no clear instructions are left behind, not knowing the passwords or about dormant accounts on long-forgotten platforms can pose problems.
It is especially important, given Mr Alif’s medical history.
When Mr Alif was 10, he was diagnosed with rhabdomyosarcoma, a soft tissue cancer. He underwent multiple chemotherapy cycles and nine surgical operations, which the 28-year-old described as a “close-to-death experience”, before he recovered.
In the fourth episode of The Straits Times’ docuseries Let’s Talk About Death, Mr Alif and Ms Liyana, 27, seek help from experts to consolidate their digital assets.
They speak to a cyber security expert to find out how to best manage their passwords. They also talk to a lawyer who specialises in digital assets to look into protecting their social media accounts, which may generate revenue in the future.
Finally, Mr Alif and Ms Liyana also attempt to write their wills with the help of artificial intelligence tools, with the key question being: Will they be valid under syariah law?
Let’s Talk About Death is a five-episode docuseries that follows several millennials and their loved ones as they navigate end-of-life planning, and it starts honest conversations about death and dying well.
WASHINGTON – A previously confidential directive by Biden administration lawyers lays out how military and spy agencies must handle personal information about Americans when using artificial intelligence, showing how the officials grappled with trade-offs between civil liberties and national security.
The results of that internal debate also underscore the constraints and challenges the government faces in issuing rules that keep pace with rapid advances in technology, particularly in electronic surveillance and related areas of computer-assisted intelligence gathering and analysis.
The administration had to navigate two competing goals, according to a senior administration official Joshua Geltzer, the top legal adviser to the National Security Council, “harnessing emerging technology to protect Americans, and establishing guardrails for safeguarding Americans’ privacy and other considerations”.
The White House last month held back the four-page, unclassified directive when President Joe Biden signed a major national security memo that pushes military and intelligence agencies to make greater use of AI within certain guardrails.
After inquiries from The New York Times, the White House has made the guidance public. A close read and an interview with Mr Geltzer, who oversaw the deliberations by lawyers from across the executive branch, offers greater clarity on the current rules that national security agencies must follow when experimenting with using AI.
Training AI systems requires feeding them large amounts of data, raising a critical question for intelligence agencies that could influence both Americans’ private interests and the ability of national security agencies to experiment with the technology.
When an agency acquires an AI system trained by a private sector firm using information about Americans, is that considered “collecting” the data of those Americans?
The guidance says that does not generally count as collecting the training data – so those existing privacy-protecting rules, along with a 2021 directive about collecting commercially available databases, are not yet triggered.
Still, the Biden team was not absolute on that question. The guidance leaves open the possibility that acquisition might count as collection if the agency has the ability to access the training data in its original form, “as well as the authorisation and intent to do so.” NYTIMES
SINGAPORE– The Cyber Security Agency (CSA) is starting a study aimed at raising the productivity and professionalism of cyber-security workers.
It may result in an outline of the competencies required of chief information security officers – known by the acronym Cisos – and their teams of security executives who are in high demand, given their key role amid surging cyber attacks.
Ms Veronica Tan, CSA’s director at safer cyberspace division, told The Straits Times: “For organisations, clarity in standards and desired skills at various roles will mean greater improvements in workforce competency and productivity.”
The study will involve industry players, training institutions and certification bodies, she added.
CSA’s plan comes as companies warm to the idea of designated cyber-security personnel, but sometimes find themselves hindered by limited budgets and a shortage of skilled talent.
Mr Nyan Yun Zaw, the first Ciso at Singapore cyber security advisory firm Athena Dynamics, said: “The industry turnover rate for Cisos is unfortunately pretty high because it is a highly challenging and stressful job.
“When the organisation faces a security incident, this is the first person everyone looks to.”
Chief information security officer, a title that arose up in the 1990s after Citibank appointed one following a cyber attack, have risen in prominence in recent years as some countries made mandatory disclosures of material cyber breaches or attacks.
There have also been high-profile cases of criminal charges taken against such officers, such as at Uber and SolarWinds.
Mr Zaw took on the job at Athena Dynamics just over a year ago when his company expanded it beyond IT infrastructure and support.
His background was a string of roles ranging from engineering, cyber security, programming, to business development and sales in the firm since its set-up in 2014.
He added to his expertise by becoming a Certified Information Systems Security Professional, a label granted by the International Information System Security Certification Consortium, also known as ISC2.
He said: “We felt that there is a need to have a dedicated Ciso since we are also part of a listed company.”
Cisos spend their time securing their companies’ assets, learning new threats and technologies, and working with cross-functional teams, he said.
He added: “Ciso is a management position, so it is important for a Ciso to be knowledgeable in various aspects of cyber ranging from governance, risk and compliance to network security architectures.”
In the 12 months leading up to September, job portal Indeed recorded 48 per cent of its postings in Singapore seeking communication skills in cyber security leaders, compared to 38 per cent specifying expertise in IT, and 16 per cent in information security.
Around the same time, the number of postings for such roles on its portal dropped 36 per cent, suggesting that firms might be filling positions through internal promotions or team restructuring, said Mr Saumitra Chand, Indeed’s career expert.
“This decline may be due to the demanding nature of leadership positions like Cisos, which require high levels of expertise and specialisation,” he said.
To help small and medium-sized enterprises (SMEs) or non-profit organisations that cannot afford designated security personnel, CSA launched its CISO-as-a-Service (CISOaaS) scheme in February 2023.
It has received about 200 applications so far.
Organisations tapping the scheme can use CSA’s panel of 19 vendors to audit their cyber health and guide them to attain CSA’s Cyber Essentials and Cyber Trust marks, with up to 70 per cent subsidies.
CSA is planning updates to the two marks to reflect new risks in cloud, operational technology and Artificial Intelligence (AI), said Ms Tan.
Digital agency Digipixel, which has used CISOaaS, said achieving both trust marks helped it gain trust from customers.
Its director, Mr Leon Tan, said: “Pooled services can sometimes lack industry-specific context, but our collaboration with CSA has been a productive exchange.”
Mr Dave Gurbani, chief executive at CyberSafe, an appointed vendor, said: “We start by conducting a cyber-security health plan, like a doctor’s check-up.”
The firm then helps its mostly SME clients work through their internal controls, configurations, policies, and training to pass the audits for CSA’s marks.
“Many SMEs still think of cyber security in terms of anti-virus tools or maybe a firewall. To put it simply, that’s like thinking you’re ready for the day just because you have your socks and shoes on,” Mr Gurbani said.
Gaps that frequently show up include outdated systems, misconfigurations from third-party vendors, and weak access controls like shared passwords and lack of Multi-Factor Authentication.
“Without guidance, these vulnerabilities can be hard to recognise and fix,” Mr Gurbani added.
Another vendor, Momentum Z, takes firms calling on the CISOaaS service through a three-pronged assessment of employees’ cyber-security basics, company’s processes and policies, and cyber-security infrastructure such as firewall, antivirus, back-up data use and endpoint security.
Chief executive Shane Chiang said he has had clients that have not changed passwords for six years, or who had been granting external vendors remote access to their network with no inkling.
He said: “’Clients are often surprised to learn the vulnerabilities in their systems, which reinforces the importance of having a Ciso to bring structure and foresight into cyber security.”
CSA’s 2023 cyber security health survey released in March noted that only one in three organisations have fully implemented at least three of CSA’s five categories of recommended measures.
More organisations need help with knowing what data they have, where the data is stored and how to secure the data, CSA’s Ms Tan said. Businesses are also weak at safeguarding their systems and networks against malicious software, as well as guarding access to data and services.
She urged more organisations to tap CSA’s tools to up their game, adding: “Unless all essential measures are adopted, organisations are still exposed to unnecessary cyber risks, especially as they accelerate digitalisation and adopt fast-evolving technologies such as AI.
“Partial adoption of measures is inadequate.”
“Singapore Issues New Guidelines to Protect Businesses from AI Security Risks”
SINGAPORE – Rogue chatbots that spew lies or racial slurs may be just the beginning, as maliciously coded free chatbot models blindly used by businesses could unintentionally expose sensitive data or result in a security breach.
In new guidelines published on Oct 15, Singapore’s Cyber Security Agency (CSA) pointed out these dangers amid the artificial intelligence (AI) gold rush, and urged businesses to test what they plan to install rigorously and regularly.
This is especially crucial for firms that deploy chatbots used by the public, or those linked to confidential customer data.
Frequent system tests can help weed out threats like prompt injection attacks, where text is crafted to manipulate a chatbot into revealing sensitive information from linked systems, according to the newly published Guidelines on Securing AI Systems .
The guidelines aim to help businesses identify and mitigate the risks of AI to deploy them securely. The more AI systems are linked to business operations, the more they should be secured.
Announcing the guidelines at the annual Singapore International Cyber Week (SICW) at the Sands Expo and Convention Centre on Oct 15, Senior Minister and Coordinating Minister for National Security Teo Chee Hean said the manual gives organisations an opportunity to prepare for AI-related cyber-security risks while the technology continues to develop.
Mr Teo said in his opening address that managing the risks that come with emerging technology like AI is an important step to build trust in the digital domain. He urged the audience to learn lessons from the rapid rise of the internet.
“When the internet first emerged, there was a belief that the ready access to information would lead to a flowering of ideas and the flourishing of debate. But the internet is no longer seen as an unmitigated good,” he said, adding that there is widespread recognition that it has become a source of disinformation, division and danger.
“Countries now recognise the need to go beyond protecting digital system to also protecting their own societies,” he said. “We should not repeat these mistakes with new technologies that are now emerging.”
The ninth edition of the conference is being held between Oct 14 and 17 and features keynotes and discussion panels by policymakers, tech professionals and experts.
AI owners are expected to oversee the security of AI systems from development, deployment to disposal, according to CSA’s guidelines, which do not address the misuse of AI in cyber attacks or disinformation.
In a statement released on Oct 15, CSA said: “While AI offers significant benefits for the economy and society… AI systems can be vulnerable to adversarial attacks, where malicious actors intentionally manipulate or deceive the AI system.”
Organisations using AI systems should consider more frequent risk assessments than with conventional systems to ensure tighter auditing of machine learning systems.
The World Health Organisation (WHO) and some 50 countries issued a warning on Nov 8 at the United Nations about the rise of ransomware attacks against hospitals, with the United States specifically blaming Russia.
Ransomware is a type of digital blackmail in which hackers encrypt the data of victims – individuals, companies or institutions – and demand money as a “ransom” in order to restore it.
Such attacks on hospitals “can be issues of life and death,” according to WHO head Tedros Adhanom Ghebreyesus, who addressed the UN Security Council during a meeting on Nov 8 called by the United States.
“Surveys have shown that attacks on the healthcare sector have increased in both scale and frequency,” Dr Ghebreyesus said, emphasising the importance of international cooperation to combat them.
“Cybercrime, including ransomware, poses a serious threat to international security,” he added, calling on the Security Council to consider it as such.
A joint statement co-signed by over 50 countries – including South Korea, Ukraine, Japan, Argentina, France, Germany and the United Kingdom – offered a similar warning.
“These attacks pose direct threats to public safety and endanger human lives by delaying critical healthcare services, cause significant economic harm, and can pose a threat to international peace and security,” read the statement, shared by US Deputy National Security Advisor Anne Neuberger.
The statement also condemned nations which “knowingly” allow those responsible for ransomware attacks to operate from.
At the meeting, Ms Neuberger directly called out Moscow, saying: “Some states – most notably Russia – continue to allow ransomware actors to operate from their territory with impunity.”
France and South Korea also pointed the finger at North Korea.
Russia defended itself by claiming the Security Council was not the appropriate forum to address cybercrime.
“We believe that today’s meeting can hardly be deemed a reasonable use of the Council’s time and resources,” said Russian ambassador Vassili Nebenzia.
“If our Western colleagues wish to discuss the security of healthcare facilities,” he continued, “they should agree in the Security Council upon specific steps to stop the horrific… attacks by Israel on hospitals in the Gaza Strip.” AFP