Junie AI Code Assistant for WebStorm: The Good, the Bad, and the Ugly
A short comparison with Claude Code and other AI assistants
I've been a long-time WebStorm user, so when JetBrains quietly dropped Junie AI into my IDE this spring, I was more than ready to give it a try over my usual mode of cut-and-paste from ChatGPT, Claude, or Gemini. I’ve been coding in JavaScript/TypeScript for over a decade, and while tools like Cursor and Windsurf are getting a lot of hype, I didn’t feel like rebuilding the entire dev experience I’m accustomed to with JetBrains WebStorm just to get access to an AI pair programmer. The good news is that Junie AI integrates directly into my existing toolchain and gives me the power of large language models without the friction of switching to a new editor or dev stack.
After a month of near-daily usage, I’ve come to see Junie not as just another AI code tool but as a legit force multiplier, especially when used the right way. It has its strengths, its blind spots, and its quirks. So let’s break it down the way developers (and Clint Eastwood) like it: the good, the bad, and the ugly.
The Good
1. Deep Codebase Awareness
Junie can intelligently parse and reason through your entire repo, even a monorepo with multiple modules. That includes scoping down to specific files, focusing on key modules, or reviewing only the parts you’ve touched in your branch. It respects .gitignore, interrogates your project structure, and treats context like a first-class citizen.
This is especially useful when you’re parachuting into a repo you didn’t write, or one you wrote a few years ago. Whether it’s a microservice that someone built and bailed on or an old legacy monolith with seven layers of tech debt, Junie helps flatten the learning curve and gets you productive in a matter of minutes.
2. Git-Aware Comparisons
One of my favorite features is how it leverages git diff. You can ask Junie to compare your working branch to main or another branch and summarize what changed, why it matters, and where you might have introduced risk. It’s like having a second reviewer who can double-check your assumptions.
3. Unit Test Automation
Let’s be real: nobody likes writing unit tests. But Junie actually makes it bearable, dare I say, enjoyable. It’ll scaffold Jest or Mocha tests that are shockingly good. In a few sessions, I was able to push code coverage from 0% to nearly 100% in some projects, including some of our more complex modules. It even calls out edge cases you might’ve missed and gives you mocks that make sense.
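To give you a flavor, here’s the shape of a test it might scaffold. This is a hypothetical reconstruction, not Junie’s actual output, and PricingService and its methods are names I made up for illustration:

import { Test } from '@nestjs/testing';
import { PricingService } from './pricing.service'; // hypothetical service under test

describe('PricingService', () => {
  let service: PricingService;

  beforeEach(async () => {
    // Build a minimal Nest testing module around the service
    const moduleRef = await Test.createTestingModule({
      providers: [PricingService],
    }).compile();
    service = moduleRef.get(PricingService);
  });

  it('applies a percentage discount', () => {
    expect(service.applyDiscount(100, 10)).toBe(90);
  });

  it('handles a zero discount', () => {
    expect(service.applyDiscount(100, 0)).toBe(100);
  });

  it('rejects negative discounts', () => {
    expect(() => service.applyDiscount(100, -5)).toThrow();
  });
});

The zero and negative cases at the end are the kind of edge cases it flags on its own.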
Here is the integration with the IDE: the plan it devises from your question is on the left, with the implementation and Chain of Thought (CoT) on the right. You can then see what changed and have an opportunity to roll back, start a new task, or ask more questions.
4. Self-Testing Suggestions
Junie doesn’t just dump code and hope for the best. When it suggests changes, it’ll attempt to verify its own output by running your test suite (if one is available) or writing its own tests to verify its changes. It will also run your lint rules, measure coverage, and make improvements if it’s not hitting your linting and coverage goals. This kind of proactive checking sets it apart from many of the other AI tools, which are basically glorified autocompletes.
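For context, the linting and coverage goals it iterates against are just your project’s existing config. Here’s a minimal sketch of what those goals might look like in a jest.config.ts; the thresholds are my own example numbers, not anything Junie requires:

import type { Config } from 'jest';

const config: Config = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  collectCoverage: true,
  coverageThreshold: {
    // Junie keeps improving its tests until these global minimums pass
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
      statements: 80,
    },
  },
};

export default config;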
5. Developer Onboarding Supercharger
This has been a game-changer for onboarding. New team members (or even experienced devs jumping onto a new service) can use Junie to understand workflows, dependency chains, and side effects within a codebase without deciphering code or hunting through out-of-date Confluence pages or Slack threads from six months ago.
6. Contextual Awareness Within Your IDE
Since it’s built into JetBrains tools like WebStorm, IntelliJ, and PyCharm, it understands the structure of your project, your modules, your imports, and even your open tabs. It doesn’t treat your repo like a blob of text; it navigates it like a developer would. Watching its CoT, I can see it making logical predictions about where to look next in the code, tracing from one class to its associated classes, just as you would.
The Bad
1. Doesn’t Always Respect Framework Conventions
Here’s a real example. I use NestJS regularly for microservices. It has built-in support for caching via its cache-manager module, which works great when paired with Redis. Junie, however, decided to implement a custom caching solution from scratch. It technically worked, but it ignored a first-party module designed to solve this exact problem.
Here’s a sample of the code it generated, reimplementing what you’d typically configure through the cache-manager module setup in NestJS.
import { Injectable, Logger, OnModuleDestroy, OnModuleInit } from '@nestjs/common';
import Redis from 'ioredis';
// `environment` is the project's config object (import omitted here)

@Injectable()
export class CacheService implements OnModuleInit, OnModuleDestroy {
  private readonly logger = new Logger(CacheService.name);
  private redis: Redis;

  async onModuleInit() {
    try {
      this.redis = new Redis({
        host: environment.REDIS.HOST,
        port: environment.REDIS.PORT,
        password: environment.REDIS.PASSWORD,
        db: environment.REDIS.DB,
        keyPrefix: environment.REDIS.KEY_PREFIX,
        maxRetriesPerRequest: 3,
        lazyConnect: true,
        // Reconnect when a replica is stuck in read-only mode
        reconnectOnError: (err) => {
          const targetError = 'READONLY';
          return err.message.includes(targetError);
        },
      });

      // Connect to Redis
      await this.redis.connect();
      this.logger.log('Successfully connected to Redis');

      // Set up event listeners
      this.redis.on('error', (error) => {
        this.logger.error('Redis connection error:', error);
      });
      this.redis.on('connect', () => {
        this.logger.log('Redis connected');
      });
      this.redis.on('ready', () => {
        this.logger.log('Redis ready');
      });
      this.redis.on('close', () => {
        this.logger.warn('Redis connection closed');
      });
      this.redis.on('reconnecting', () => {
        this.logger.log('Redis reconnecting...');
      });
    } catch (error) {
      this.logger.error('Failed to connect to Redis:', error);
      throw error;
    }
  }

  // The class declares OnModuleDestroy, so close the connection on shutdown
  async onModuleDestroy() {
    await this.redis?.quit();
  }
}
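For contrast, here’s roughly what the first-party approach looks like. This is a minimal sketch assuming @nestjs/cache-manager with the cache-manager-redis-yet store; exact package names and options vary by version:

import { Module } from '@nestjs/common';
import { CacheModule } from '@nestjs/cache-manager';
import { redisStore } from 'cache-manager-redis-yet';
// `environment` is the same project config object used above (import omitted)

@Module({
  imports: [
    CacheModule.registerAsync({
      isGlobal: true,
      useFactory: async () => ({
        // cache-manager owns the connection, retries, and TTL handling
        store: await redisStore({
          socket: { host: environment.REDIS.HOST, port: environment.REDIS.PORT },
          password: environment.REDIS.PASSWORD,
          database: environment.REDIS.DB,
          ttl: 60_000, // default TTL in milliseconds
        }),
      }),
    }),
  ],
})
export class AppModule {}

Services then inject the shared cache with @Inject(CACHE_MANAGER) instead of each managing their own Redis client.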
This kind of framework myopia isn’t uncommon. Junie is brilliant at generating code, but it sometimes lacks awareness of best practices or community conventions, especially when it’s pulling general patterns instead of framework-specific approaches.
2. Doesn’t Always Write Idiomatic Code
The output is usually good but not perfect. Sometimes, it gets too verbose, glosses over typing and proper exception handling, or uses anti-patterns you’d catch in a code review. It’s not reckless, but it definitely benefits from a seasoned dev keeping an eye on it. Once it’s found a workable solution, I’ll sometimes prompt it to review the code as a world-class senior developer in whatever technology or framework I’m using, looking for optimizations or improvements.
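A contrived example of the kind of thing I mean, with both snippets written by me rather than Junie. The first version wraps a function that never throws in a useless try/catch and leaves the parameter loosely typed; the second is what a reviewer would push for:

// Loosely typed, with dead-code exception handling
function parsePort(value) {
  try {
    return parseInt(value); // parseInt never throws, and the radix is missing
  } catch (e) {
    return 3000;
  }
}

// Explicitly typed, handling the failure case the language actually produces
function parsePortStrict(value: string, fallback = 3000): number {
  const port = Number.parseInt(value, 10);
  return Number.isNaN(port) ? fallback : port;
}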
3. Lacks Memory Across Projects
Right now, Junie doesn’t retain memory across projects or sessions. That means you’ll often re-explain the same architecture decisions if you’re working across multiple services or codebases. It’s not a dealbreaker, but it’s definitely something to be aware of if you’re expecting long-term reasoning. Even within a conversation session, the context length may be constrained, so try to keep tasks discrete.
The Ugly
1. Context Pile-Up
This is where things can spiral. If you stay in the same prompt window too long, Junie keeps building on its own previous assumptions, which is what you’d expect with a context window as large as Sonnet’s (200K tokens). If the initial implementation was flawed or missing something, you end up with multiple layers of patchwork fixes. That can easily devolve into spaghetti code if you’re not careful.
The fix? Start a new session when things feel off. Treat each problem as its own isolated task. Sometimes, I’ll just open a new prompt anyway and ask it to use git diff to review the changes and make new improvements. The bottom line is that it’s best to work on a discrete problem, resolve it, commit, and then move on to the next issue.
2. Abstract Problems = Painful Loops
Junie shines when you ask it to write a function, add tests, or refactor something specific. But when you’re dealing with architectural problems, cross-cutting concerns, or vague business logic, it starts to wobble. The iterations get longer, and the value drops with each prompt unless you course-correct hard.
In these situations, I like to use Junie more as a guide and switch from “Code” mode to “Ask” mode. There, I can have a conversation about the code, break the problem down into workable parts based on its responses, and formulate a strategy.
3. You Will Want to Use Rollback
Fortunately, Junie gives you a “rollback” feature for when the going gets tough. It lets you nuke the AI’s changes and revert to your previous code; you can even roll back multiple answers if you want. This has saved me more than once from junk output that looked promising at first but quickly unraveled. It’s a very helpful feature!
Junie vs. the Competition
While I was experimenting with Junie, I also tried Claude Code. Claude Code installs as an npm package and runs directly from the terminal. In my experience, Claude Code produces code that aligns well with framework best practices. Junie, on the other hand, offers full IDE integration, including features like Chain of Thought, session rollback, and detailed change history. Both tools run on the Sonnet 4 engine, but I found Junie’s workflow features more useful for daily development in WebStorm. That said, it’s easy enough to keep Claude Code in my IDE’s terminal and use it when I’m having issues with Junie’s results or run out of tokens, which hasn’t happened yet.
Note: Junie defaults to Sonnet 3.7, but you can switch to Sonnet 4, and I’m sure JetBrains will add more models in the future. Anthropic’s Claude models are especially strong when it comes to reasoning through complex logic and generating structured, readable code. In my experience, Claude Code edges out Cursor and Copilot in raw problem-solving. OpenAI o3 is no slouch either; it’s neck-and-neck with Claude depending on the task, and I often turn to it for Terraform, where it produces great output.
As for Cursor and Windsurf, they’re not totally “free.” While the editors are free to download, they don’t include unlimited AI usage. You’ll need to bring your own Anthropic or OpenAI API keys, and tokens add up fast. For serious daily dev work, that cost can blow past Junie’s flat-rate pricing quickly. For my part, I upgraded my Junie plan from Pro to Ultimate, which gives you 40x more tokens.
Final Verdict
Junie AI has made me significantly more productive, especially when I’m working on code I didn’t write. It cuts down time spent tracing through unfamiliar logic, lets me roll out features faster, and reduces my dependency on other devs to get unblocked. That’s a big deal, especially on lean teams or when working across multiple services.
It’s also cost-effective. The Junie AI Ultimate license is just $20/month, and I haven’t run out of tokens once. The IDEs themselves are a steal: you can grab WebStorm or PyCharm for less than $240 for five seats under their startup program. That’s a rounding error for most startup budgets.
It’s not perfect. You still need to think. You still need to review the output. But Junie fits naturally into the flow of development without feeling like a drag. It doesn’t get in the way, and it makes the right things faster without (mostly) creating new headaches, as long as you use it intentionally. And when you get in a bind, you can bounce its output against other AI assistants like Claude Code, Gemini, or OpenAI.
If you're already in the JetBrains ecosystem, adding Junie is a no-brainer. If you’re not, it might be the reason you switch, on top of all the other benefits WebStorm brings as a general-purpose IDE.
I highly recommend checking it out.