Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.
Hackers stole the data of more than 700,000 current and former customers and employees of Patelco Credit Union in a monthlong ransomware attack detected in June, the California financial institution said. The breach did not affect all 726,000 victimized individuals equally.
Chat app Slack patched a vulnerability in its artificial intelligence tool set that hackers could have exploited to manipulate an underlying large language model to phish employees and steal sensitive data. Slack said it was a low-severity bug.
This week, Binance, ASX and Google sued; Solana users targeted; McDonald's X account hacked; Mango Markets and SEC settled; China updated AML law; sentencing in the HTSB case; arrest in the BitConnect case; Australia shuttered 615 scams; Malaysia adopted Worldcoin, arrested crypto thieves.
California state lawmakers watered down a bill aimed at preventing artificial intelligence disasters after hearing criticism from industry and federal representatives. The bill still faces opposition from Silicon Valley and Democratic lawmakers.
This week, FTX settled with the CFTC, the Mango Markets hacker sought dismissal of charges, WazirX said it will reverse trades, Solana fixed a vulnerability, the SEC sued NovaTech and settled with Ideanomics, and researchers discovered a new way to steal crypto private keys.
The widespread use of generative artificial intelligence has brought on a case of real life imitating art: Humans have begun to bond with their AI chatbots. Such anthropomorphism, treating an object as a person, is not a total surprise, especially for companies developing AI models.
U.S. law enforcement charged two alleged masterminds of one of the largest Russian-language cybercrime forums after they claimed asylum inside the United States and lived a luxurious life in Miami. Federal agents obtained an image of the server hosting the forum.
Every week, ISMG rounds up cybersecurity incidents in digital assets. This week, a $12M white hat hack on Ronin Bridge, Cryptonator indictment, potential prison sentence in Crypto.com case, a $212K Convergence hack, Do Kwon's extradition, and the FBI published a scam warning.
The delay in the rollout of Nvidia's artificial intelligence chips could slow the rapid pace of AI development but is unlikely to cause a significant setback for the chip giant or its customers. The company delayed the release of its Blackwell B200 AI chips at least three months due to design flaws.
Human error is a major contributor to payments fraud, but only about 5% of organizations have fully automated their payment processes to reduce mistakes. Experts say artificial intelligence-enabled automation will help reduce risks, but the benefits of this technology are still a distant reality.
OpenAI is "excited" to provide early access to its next foundational model to a U.S. federal body that assesses the safety of the technology, CEO Sam Altman said on Thursday. OpenAI earlier essentially disbanded a "superalignment" safety team set up to prevent AI systems from going rogue.
Semiconductor designer Nvidia is reportedly the subject of two separate U.S. Department of Justice antitrust probes, focused on its acquisition of an Israeli artificial intelligence company and the chip giant's alleged anti-competition business practices.
This week, a Ukrainian was murdered over three bitcoins, FTX's Salame sought to postpone sentencing over a dog bite, Russian speakers drove crypto cybercrime, the FTC fined a Coinbase Group company $4.5 million, and Hong Kong police arrested scammers.
The United States government gave a cautious blessing for unrestricted access to open artificial intelligence foundation models, warning that users should be prepared to actively monitor risks. Open-weight AI models are essentially ready-to-use molds for developers to build applications on.
A machine learning model that Meta released last week to prevent prompt injection attacks is vulnerable to prompt injection attacks, researchers said. There is as yet no definitive solution to the problems of jailbreaking and prompt injection attacks.