# Tech 5: Tesla Experiences Largest Recall to Date, Ledger Targeted in Cyber Attack

Tesla (NASDAQ:TSLA) recalled over 2 million of its vehicles in the US this past week.

Meanwhile, new technology out of the University of Technology Sydney has the potential to give a voice to non-speaking patients, and Grindr (NYSE:GRND) has become the latest app to introduce generative artificial intelligence (AI).

For those stories and more, keep reading to learn about the latest news in tech.

## 1. NHTSA recalls over 2 million Teslas

Over 2 million Tesla vehicles in the US, nearly every one of the company's cars on American roads, have been recalled following a probe by the country's National Highway Traffic Safety Administration (NHTSA).

Explaining the reason for the recall, the agency cited concerns about the electric vehicles' Autosteer feature, which falls under the Autopilot umbrella, saying there are not enough safety measures in place to prevent driver misuse.

Specifically, the NHTSA believes the software lacks adequate safeguards to ensure that drivers stay attentive while Autosteer is engaged. Autosteer uses AI to maintain speed and following distance, steer and change lanes, but it is not an autonomous driving system.

According to Reuters, the NHTSA has been investigating the safety of Autosteer since 2021 amid numerous reports of fatal car crashes involving the feature. Tesla said it disagrees with the agency's analysis, but it has accepted the recall and is rolling out an over-the-air software update that adds further safeguards to those already in place.

The recall also affects about 193,000 vehicles in Canada.

## 2. Ledger targeted in cyber attack

Crypto hardware and cold wallet manufacturer Ledger, a Paris-based startup, has become the latest cryptocurrency company to be targeted in a malware attack.

According to a letter from Ledger CEO Pascal Gauthier published on the company's website, a former employee was targeted in a phishing scheme, and the attacker used the resulting access to insert an exploit into Ledger Connect Kit, the company's software for connecting apps to users' wallets, on Thursday (December 14).

The exploit was removed after being live for less than two hours. The company has not disclosed how many wallets were affected, but Bloomberg reports that “hundreds of thousands of dollars” were stolen from user wallets.

## 3. Grindr ‘matches’ with generative AI in new collaboration

Grindr, an online dating network popular among the LGBTQ+ community, announced on Thursday that it is partnering with Ex-Human, a generative AI company that provides users with customizable chatbots.

Ex-Human's chatbots differ from OpenAI's ChatGPT, which focuses on answering questions and automating tasks like drafting emails and fact checking; they are also different from Inflection AI's Pi, which does all of those things while offering friendly, conversational dialogue. Instead, Ex-Human's chatbots are meant to provide users with an ‘emotionally fulfilling connection.’ In other words, they are designed to flirt.

Ex-Human Founder and CEO Artem Rodichev said in a press release, “We are incredibly excited to introduce Ex-Human’s models to the Grindr platform. Our technology has valuable applications to improve the user experience in dating, and we look forward to working with the Grindr team to help bring these features to life for the LGBTQ+ community.”

During an interview with Bloomberg, Grindr CEO George Arison referred to an AI feature still in development that he called ‘Grindr Wingman.’ Like any good wingman, the tool could help take some of the stress out of dating by suggesting conversation prompts based on a love interest's profile, generating responses, suggesting date locations and more.

## 4. New software can prevent voice deepfakes

As AI capabilities have advanced, it has become increasingly difficult to distinguish between real and deepfake human speech. Audio clips of a person’s voice can easily be cloned and used to spread harmful, malicious or self-incriminating misinformation, which has enormous implications for fake news, among other things.

However, a researcher at the McKelvey School of Engineering at Washington University in St. Louis, Missouri, has come up with a new tool designed to protect a person's voice from being cloned. The tool, called AntiFake, was presented at the Association for Computing Machinery's Conference on Computer and Communications Security, held in Denmark on November 27, and Scientific American published a feature on the software on Thursday.

According to Ning Zhang, the computer scientist and engineer who developed the tool, it works by distorting the audio clip just enough to make it unusable for a voice clone, but not enough that it’s unintelligible for human ears.
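The article does not describe AntiFake's algorithm beyond that trade-off: a change large enough to throw off cloning software, small enough to leave the speech intact for listeners. As a rough, purely illustrative Python sketch of the "small, bounded change" idea (the function names, the random perturbation and the budget below are assumptions for illustration; a real protection tool would optimize the perturbation against a voice-cloning model rather than draw it at random), it might look something like this:

```python
import numpy as np

def perturb_waveform(wave: np.ndarray, rng: np.random.Generator,
                     rel_budget: float = 0.01) -> np.ndarray:
    """Add a small, amplitude-bounded perturbation to an audio waveform.

    rel_budget caps the perturbation at a fraction of the signal's peak
    amplitude so the change stays quiet relative to the speech itself.
    (A real tool would optimize this perturbation against a cloning model
    instead of sampling it at random.)
    """
    peak = np.max(np.abs(wave))
    budget = rel_budget * peak
    delta = rng.normal(scale=budget / 3, size=wave.shape)  # illustrative noise
    delta = np.clip(delta, -budget, budget)                # enforce the budget
    return np.clip(wave + delta, -1.0, 1.0)

def snr_db(clean: np.ndarray, perturbed: np.ndarray) -> float:
    """Signal-to-noise ratio in dB between the original and perturbed clips."""
    noise = perturbed - clean
    return 10 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 16_000, endpoint=False)  # 1 s at 16 kHz
    wave = 0.5 * np.sin(2 * np.pi * 220 * t)            # stand-in for speech
    protected = perturb_waveform(wave, rng)
    print(f"SNR of perturbed clip: {snr_db(wave, protected):.1f} dB")
```

The SNR check is a crude stand-in for the "still intelligible to humans" constraint: a high value means the added distortion is small relative to the original signal.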

## 5. New system transforms thoughts into text

The University of Technology Sydney (UTS) has unveiled a new device created by the GrapheneX-UTS Human-centric Artificial Intelligence Centre. The team, led by Professor CT Lin, has developed a lightweight, portable and non-invasive tool that can record electrical activity in the brain and convert it into text, which can be displayed on a screen.

A video demonstration of the device is available online.

The device uses an electroencephalogram (EEG), a thin, net-like cap that tracks the brain's electrical activity through the scalp. The recorded wave patterns are then segmented into categories and fed into DeWave, an AI model developed by the researchers that translates them into words and phrases.
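The article describes this pipeline only at a high level: wave patterns are sorted into categories, and a model maps those categories to words. As a loose, purely illustrative Python sketch of that flow (the codebook, features, window size and word table below are made-up stand-ins, not DeWave's actual design, which is a trained neural model), it could be pictured like this:

```python
import numpy as np

# Hypothetical stand-ins: a tiny "codebook" of wave-pattern centroids and a
# word table mapping each discrete code to a word. In the real system these
# mappings are learned by a trained model, not hard-coded.
CODEBOOK = np.array([
    [0.0, 0.0],   # code 0
    [1.0, 0.5],   # code 1
    [-0.5, 1.0],  # code 2
])
WORDS = {0: "hello", 1: "yes", 2: "water"}

def featurize(window: np.ndarray) -> np.ndarray:
    """Reduce one EEG window (channels x samples) to a small feature vector:
    here, just the mean amplitude and standard deviation of the window."""
    return np.array([window.mean(), window.std()])

def categorize(window: np.ndarray) -> int:
    """Assign the window to its nearest codebook entry (a discrete code)."""
    dists = np.linalg.norm(CODEBOOK - featurize(window), axis=1)
    return int(np.argmin(dists))

def eeg_to_text(eeg: np.ndarray, window_size: int = 250) -> str:
    """Slide over the multi-channel EEG recording, categorize each window,
    and 'translate' the resulting code sequence into words."""
    n_windows = eeg.shape[1] // window_size
    codes = [categorize(eeg[:, i * window_size:(i + 1) * window_size])
             for i in range(n_windows)]
    return " ".join(WORDS[c] for c in codes)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_eeg = rng.normal(size=(8, 1000))  # 8 channels, 1,000 samples of noise
    print(eeg_to_text(fake_eeg))
```

In the actual system, both the categorization of the wave patterns and their translation into language are learned by the DeWave model from training data rather than written by hand.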

This groundbreaking technology has the potential to revolutionize communication and interaction for individuals with speech loss or communication disorders.

Securities Disclosure: I, Meagen Seatter, hold no direct investment interest in any company mentioned in this article.

This post appeared first on investingnews.com