China Launches New Open Source AI Model GLM-4.7 in a Major AI Breakthrough

A new open source model called GLM-4.7 has just arrived, and it's turning heads by performing almost as well as some of the best closed systems on tough tasks like coding and tool use. The model comes from Zhipu AI, a company pushing hard in the AI space. Released on December 22, 2025, GLM-4.7 focuses on real coding work, where the AI needs to plan steps, fix bugs, and stay on track over long sessions. Many people are excited because it brings high-level abilities without locking everything behind paywalls.
What Makes GLM-4.7 Stand Out
GLM-4.7 builds on earlier versions with big improvements in areas that matter for developers. It handles complex coding challenges better: on SWE-Bench Verified, a test that checks whether a model can fix real issues in code repositories, it scored 73.8 percent. That's a strong result for an open model, meaning it can read code, find problems, and make changes that actually work. On LiveCodeBench, which looks at everyday coding skills like handling edge cases, GLM-4.7 reached 84.9 percent. It also did well in multilingual coding, scoring 66.7 percent on a version of SWE-Bench that spans multiple programming languages.

What really helps in practical use is how it works with agents: setups where the AI breaks a big task into steps, uses tools like browsers or terminals, and keeps going without getting confused. GLM-4.7 shines here because it has special thinking modes. One mode lets it reason step by step before acting. Another preserves that reasoning across multiple turns, so it doesn't forget its plan midway. There is also control to turn thinking on or off depending on the task: quick for simple work, deep for hard problems. This makes long workflows more stable. In terminal tasks, which are tricky because commands must run in the right order, it jumped to 41 percent on Terminal Bench 2.0. With tools enabled, it scored 42.8 percent on a tough exam-style benchmark, a big leap forward. On web browsing tasks, it hit the mid-60s with good context handling. These gains show it's built for real agent systems, like those used in coding environments.
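The idea of "preserved thinking" across turns can be pictured with a small, purely illustrative Python sketch. None of these names come from Zhipu's API; the code only models the concept of reasoning that is kept in state between turns, with a toggle for quick versus deep modes:

```python
# Illustrative sketch of "preserved thinking" in a multi-turn agent loop.
# AgentState and run_turn are hypothetical names, not Zhipu's actual API.

from dataclasses import dataclass, field

@dataclass
class AgentState:
    """Carries the visible actions plus the model's prior reasoning."""
    transcript: list = field(default_factory=list)
    reasoning: list = field(default_factory=list)  # preserved across turns

def run_turn(state: AgentState, task: str, think: bool = True) -> str:
    """One agent turn: optionally reason first, then act.

    think=False models the quick mode for simple tasks;
    think=True models step-by-step reasoning that stays in state,
    so later turns can still see earlier plans.
    """
    if think:
        state.reasoning.append(f"plan for: {task}")  # kept, not discarded
    action = f"did: {task}"
    state.transcript.append(action)
    return action

state = AgentState()
run_turn(state, "locate failing test", think=True)
run_turn(state, "apply fix", think=True)
run_turn(state, "rerun tests", think=False)  # quick mode: no new reasoning

# Later turns can still read the first turn's plan:
assert state.reasoning == ["plan for: locate failing test", "plan for: apply fix"]
assert len(state.transcript) == 3
```

The design point is simply that reasoning lives in the session state rather than being thrown away after each reply, which is what keeps long workflows from drifting.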
Developers are already trying it in tools that support swapping models. It’s available through APIs and can run locally for those with strong hardware. The open weights mean anyone can customize it or deploy it freely, which is a huge plus for teams wanting control.
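For teams with the hardware, local serving could look roughly like the following sketch. The Hugging Face repository name and GPU count here are assumptions for illustration; check the official release page for the real identifiers:

```shell
# Hypothetical repo ID -- verify the actual GLM-4.7 name on Hugging Face.
pip install vllm
vllm serve zai-org/GLM-4.7 --tensor-parallel-size 8

# vLLM exposes an OpenAI-compatible endpoint you can query directly:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "zai-org/GLM-4.7",
       "messages": [{"role": "user", "content": "Explain this stack trace."}]}'
```

Because the endpoint follows the OpenAI chat format, tools that support swapping models can usually point at it with nothing more than a base-URL change.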
How It Compares to Other Models
GLM-4.7 often beats or matches other open models like DeepSeek V3.2 and Moonshot’s Kimi K2 on coding benchmarks. In some areas, it comes close to closed leaders like Claude Sonnet 4.5 or Gemini 3.0 Pro.
For instance, in math competitions like AIME 2025, it performed at the top among open source models. Community tests show it's reliable for multi-step coding, though some say it's not quite at the absolute frontier in every scenario. Still, the balance of strong performance and openness makes it a favorite for many teams building agent workflows.

Another Cool Update in AI Image Tools
Around the same time, there’s buzz about better ways to edit AI-generated images. Tools like Manus have added features for precise changes. You can generate a picture, then select parts to tweak—like changing a color or fixing text—without messing up the whole thing.
This uses advanced models, possibly tied to high-quality generators like Google’s Nano Banana Pro, known for realistic interiors and clean outputs. The idea is to make iteration faster: refine instead of starting over.
For slides or designs, you can edit elements directly, apply changes across multiple items, and keep everything consistent. It's available on web and mobile, helping teams make quick fixes. Ownership stays with the user, making it clear the images can be used for personal or work projects. This shift turns image AI from random retries into a real editing workflow.
Why These Updates Matter for the Future
China’s AI scene is moving fast, with open models closing gaps quickly. GLM-4.7 shows how far open source has come in agentic tasks, making advanced coding help more accessible.
Paired with smarter image editing, these tools let more people create and build without huge barriers. As discussions on platforms like X show, developers are testing GLM-4.7 in real projects and often praise its stability in long sessions. Trends point to growing interest in Chinese open AI, with models gaining global use. These steps highlight a broader story: AI is becoming more practical and open, helping everyday creators and coders tackle bigger ideas.
FAQ
What is GLM-4.7?
GLM-4.7 is an open source AI model from Zhipu AI, released in late 2025. It excels in coding, reasoning, and agent tasks, with features like preserved thinking for better consistency in long workflows.
Is GLM-4.7 better than DeepSeek or Kimi?
It often outperforms DeepSeek V3.2 and Kimi K2 on key coding benchmarks like SWE-Bench and LiveCodeBench. Results vary by task, but it’s competitive among open models.
Can I run GLM-4.7 on my own computer?
Yes, since it's open weights. You can download the weights from places like Hugging Face and serve them with frameworks like vLLM, though full performance needs strong hardware.
What are the new image editing features mentioned?
Tools like Manus now allow precise local edits on generated images, preserving style while changing details. This makes refining visuals easier without full regenerations.
