Anthropic Limits Access to New AI Model

Introduction

Anthropic has a new AI model called Claude Mythos. This AI can find problems in computer security. Anthropic does not want everyone to use it, so it limits who can use the model to keep it safe.

Main Body

Claude Mythos can find holes in many different computer systems. A group in the UK tested the AI and found that it can plan and carry out cyber attacks.

Anthropic created Project Glasswing. Under this project, only 40 big companies can use the AI. These companies include Microsoft, Apple, and Amazon. They use the AI to fix their own security problems. However, some people used the AI without permission. They found a way into the system through another company, and Anthropic is now checking this problem.

Banks in India, Australia, and the UK are also looking at the AI. They want to know if it is dangerous for money systems. The US government is also talking about this AI. Anthropic does not let Chinese companies use the model.

Conclusion

Claude Mythos shows that AI is very powerful. It can help people, but it can also be dangerous. Companies and governments must work together to keep the internet safe.

Vocabulary Learning

dangerous
Something that can cause harm. 危險的
Example: Fire is dangerous.
find
To discover or locate something. 發現
Example: I found my keys on the table.
problem
A situation that causes difficulty. 問題
Example: There is a problem with my computer.
safe
Not in danger; protected. 安全的
Example: The money is safe in the bank.
test
To try something to see if it works. 測試
Example: The teacher will test the students tomorrow.