Scams related to ChatGPT are on the rise. Despite OpenAI offering users a free version of ChatGPT, scammers lead victims to fraudulent websites and claim that they need to pay for these services, a report said on Thursday. There is another significant risk in using these chatbots.
“They can collect and steal the input you provide. In other words, providing anything sensitive or confidential can put you at risk. The chatbot’s responses can also be manipulated to give you wrong answers or misleading information,” according to researchers at Palo Alto Networks Unit 42.
Unit 42 saw a 910 percent increase in monthly registrations for ChatGPT-related domains between November 2022 and April 2023.
The researchers also recorded more than 100 daily detections of malicious ChatGPT-related URLs, captured from traffic observed in their advanced URL filtering system. They saw an almost 18,000 percent increase in squatting domains in the DNS security log over the same time frame. Scammers may use social engineering related to ChatGPT for identity theft or financial fraud.
The report states, “Fake ChatGPT sites try to entice victims into providing their confidential information, such as credit card details and email addresses.”
Some scammers are taking advantage of the growing popularity of OpenAI for crypto fraud, using Elon Musk’s name to lure victims to fraudulent crypto giveaway events.
“Whether they are offered for free or not, these copycat chatbots are not trustworthy. Many of them are actually based on GPT-3 (released June 2020), which is less powerful than the more recent GPT-3.5 and GPT-4,” the report noted.
“To stay safe, ChatGPT users should exercise caution with suspicious emails or links related to ChatGPT. Furthermore, the use of copycat chatbots brings additional security risks. Users should always access ChatGPT through the official OpenAI website,” the report stressed.
– IANS