Tuesday, 2 May 2023

Cancer scare: Why we banned Indomie noodles —NAFDAC

 

The National Agency for Food and Drug Administration and Control, NAFDAC, said it would commence random sampling of Indomie noodles, including the seasoning, from production facilities.


This comes on the heels of the recall of Indomie noodles by Taiwanese and Malaysian authorities, following the discovery of ethylene oxide, a cancer-causing agent.


Announcing this in a statement, the Director-General of the agency, Prof. Mojisola Adeyeye, explained that the compound of interest was ethylene oxide and said the Director of the Food Laboratory Services Directorate had already been engaged and had started working on the methodology for the analysis.


Adeyeye said: “Indomie noodles have been banned from being imported into the country for many years. It is one of the foods on the government prohibition list. It is not allowed in Nigeria, and therefore not registered by NAFDAC.


“What we are doing is an extra caution to ensure that the product is not smuggled in, and if so, our post-marketing surveillance would detect it. We also want to be sure that the spices used for the Indomie and other noodles in Nigeria are tested.


“That is what NAFDAC Food Safety and Applied Nutrition, FSAN, and Post Marketing Surveillance, PMS, are doing this week at the production facilities and in the market, respectively.”


She, however, promised that Nigerians would be duly updated on the outcome of the investigation.

According to the World Health Organisation, WHO, ethylene oxide is a colourless, highly reactive and flammable gas widely used as an intermediate in the production of various chemicals.


WHO, in a report, noted that findings from animal studies, test systems and epidemiological investigations suggested an increased incidence of human cancer.


The report concluded that ethylene oxide should be considered a probable human carcinogen and that its levels in the environment should be kept as low as feasible.


Source: Vanguard online

Wednesday, 15 February 2023

ChatGPT

Since OpenAI released its blockbuster bot ChatGPT in November, users have casually experimented with the tool, with even Insider reporters trying to simulate news stories or message potential dates. 

To older millennials who grew up with IRC chat rooms — a text instant message system — the personal tone of conversations with the bot can evoke the experience of chatting online. But ChatGPT, the latest in technology known as "large language model tools," doesn't speak with sentience and doesn't "think" the way people do. 

That means that even though ChatGPT can explain quantum physics or write a poem on command, a full AI takeover isn't exactly imminent, according to experts.

"There's a saying that an infinite number of monkeys will eventually give you Shakespeare," said Matthew Sag, a law professor at Emory University who studies copyright implications for training and using large language models like ChatGPT.


"There's a large number of monkeys here, giving you things that are impressive — but there is intrinsically a difference between the way that humans produce language, and the way that large language models do it," he said. 

Chat bots like GPT are powered by large amounts of data and computing techniques that make predictions in order to string words together in a meaningful way. They not only tap into a vast amount of vocabulary and information, but also understand words in context. This helps them mimic speech patterns while dispensing encyclopedic knowledge. 
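To make that "predicting the next word from data" idea concrete, here is a minimal sketch of a toy bigram model in Python. This is emphatically not how ChatGPT works internally (it uses a far larger transformer neural network trained on vastly more text), and the corpus and function names below are invented purely for illustration; the sketch only shows the basic principle of stringing words together by statistical prediction.

```python
import random
from collections import defaultdict, Counter

# Tiny stand-in corpus for the web-scale text a real model is trained on.
corpus = (
    "large language models predict the next word . "
    "language models learn patterns from large amounts of text . "
    "the models string words together by predicting what comes next ."
).split()

# Count, for each word, which words tend to follow it (a bigram table).
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def generate(start_word, length=10):
    """Generate text by repeatedly sampling a likely next word."""
    word, output = start_word, [start_word]
    for _ in range(length):
        candidates = followers.get(word)
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed this one.
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts, k=1)[0]
        output.append(word)
    return " ".join(output)

print(generate("language"))
```

Running the sketch produces short, grammatical-looking strings assembled entirely from statistics of the toy corpus, which is the same general mechanism, scaled up enormously, that lets large language models sound fluent without "thinking" the way people do.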

Other tech companies like Google and Meta have developed their own large language model tools, which use programs that take in human prompts and devise sophisticated responses. OpenAI, in a revolutionary move, also created a user interface that lets the general public experiment with it directly.

Some recent efforts to use chat bots for real-world services have proved troubling — with odd results. The mental health company Koko came under fire this month after its founder wrote about how the company used GPT-3 in an experiment to reply to users. 


Koko cofounder Rob Morris hastened to clarify on Twitter that users weren't speaking directly to a chat bot, but that AI was used to "help craft" responses. 

The founder of the controversial DoNotPay service, which claims its GPT-3-driven chat bot helps users resolve customer service disputes, also said an AI "lawyer" would advise defendants in actual courtroom traffic cases in real time, though he later walked that back over concerns about its risks. 

Other researchers seem to be taking more measured approaches with generative AI tools. Daniel Linna Jr., a professor at Northwestern University who works with the non-profit Lawyers' Committee for Better Housing, researches the effectiveness of technology in the law. He told Insider he's helping to experiment with a chat bot called "Rentervention," which is meant to support tenants. 

That bot currently uses technology like Google Dialogflow, another large language model tool. Linna said he's experimenting with ChatGPT to help "Rentervention" come up with better responses and draft more detailed letters, while gauging its limitations.


"I think there's so much hype around ChatGPT, and tools like this have potential," said Linna. "But it can't do everything — it's not magic."  

OpenAI has acknowledged as much, explaining on its own website that "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers."  

Read Insider's coverage on ChatGPT and some of the strange new ways companies are using chat bots: 

The tech world's reception to ChatGPT:
Microsoft is chill with employees using ChatGPT — just don't share 'sensitive data' with it.

Microsoft's investment into ChatGPT's creator may be the smartest $1 billion ever spent

ChatGPT and generative AI look like tech's next boom. They could be the next bubble.

The ChatGPT and generative-AI 'gold rush' has founders flocking to San Francisco's 'Cerebral Valley'

Insider's experiments: 
I asked ChatGPT to do my work and write an Insider article for me. It quickly generated an alarmingly convincing article filled with misinformation.

I asked ChatGPT to reply to my Hinge matches. No one responded.

Developments in detecting ChatGPT: 
Teachers rejoice! ChatGPT creators have released a tool to help detect AI-generated writing

A Princeton student built an app which can detect if ChatGPT wrote an essay to combat AI-based plagiarism

ChatGPT in society: 
BuzzFeed writers react with a mix of disappointment and excitement at news that AI-generated content is coming to the website

ChatGPT is testing a paid version — here's what that means for free users

A top UK private school is changing its approach to homework amid the rise of ChatGPT, as educators around the world adapt to AI

Princeton computer science professor says don't panic over 'bullshit generator' ChatGPT

DoNotPay's CEO says threat of 'jail for 6 months' means plan to debut AI 'robot lawyer' in courtroom is on ice

It might be possible to fight a traffic ticket with an AI 'robot lawyer' secretly feeding you lines to your AirPods, but it could go off the rails

Online mental health company uses ChatGPT to help respond to users in experiment — raising ethical concerns around healthcare and AI technology

ChatGPT is coming for classrooms, hospitals, marketing departments, and everything else as the next great startup boom emerges.