Which AI, and at what price?

Hello everyone,

I wouldn't know where else to ask this question.

My goal is to use an AI efficiently. I currently use the web-based Claude 4.5 in the free version, but the chat sessions for coding are often so short that I have to wait several hours before I can continue.

What I want to know: which AI would be a good fit so that I can chat with it longer, i.e. actually use it more?

The problem with Claude is that many users say even the paid version is barely better than the free one, i.e. the usage windows are still very short.

For data-protection reasons I definitely want to avoid Gemini and ChatGPT; they are a no-go for me.

What I do with the AI: I give it tasks to write code, then tell it what to improve or change, so the chats keep getting longer until a kind of timeout kicks in.

Running an LLM locally on my laptop makes little sense: too slow and too cumbersome.

Can you give me some advice? Maybe MiniMax M2? Stay with Claude and get a paid plan?

Thanks and best regards…


Hope this helps you make a comparison…


Given your constraints (no ChatGPT, no Gemini, no local LLM, strong privacy, long coding chats), the cleanest setup is:

  1. Make Proton Lumo Plus your main “all-day” coding/chat assistant.
  2. Optionally add MiniMax M2 as a cheap, very strong coding API for heavy tasks.
  3. Use Claude only if you accept US hosting and hard usage caps, and only after disabling training in settings.
  4. Optionally consider Mistral Le Chat Pro as a second privacy-friendly assistant if you want a more mainstream model than Lumo.

Below is the detailed comparison, with context and prices.


1. Why Claude free feels “so short”

There are two separate technical limits:

  1. Session / rate limits

    • Claude’s Pro plan is documented at about 45 messages every 5 hours for typical text chats. (Frank, North.)
    • Free is roughly 5× lower, so you get something like a handful of messages per 5-hour window before being throttled. Anthropic only says “Pro is at least 5× free per session,” but does not publish exact free numbers. (Claude Support Center)
    • Since coding prompts tend to be long and include code blocks, they consume more quota per message. So you hit the wall fast.
  2. Context window / “conversation too long”

    • Every model has a limited context (number of tokens it can see at once).
    • When the conversation plus your code exceed that limit, the system must summarize or truncate. That can look like a timeout, or like the model “forgetting” the earlier part of the conversation.

So with Claude free, you hit:

  • A small message budget per 5 hours, and
  • Context limits on top of that.

Claude Pro improves the first, but does not remove either.
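To get a feel for why long coding prompts burn through a message quota so fast, here is a minimal Python sketch using the common rule of thumb of roughly 4 characters per token. This is only a heuristic (every model uses its own tokenizer, and code often tokenizes less efficiently than prose), so treat the numbers as rough estimates:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text and code."""
    return max(1, len(text) // 4)

short_prompt = "Fix the off-by-one error in this loop."
# A prompt that pastes a ~300-line module alongside the question:
long_prompt = "Here is my module, please review it:\n" + ("x = compute(x)\n" * 300)

print(estimate_tokens(short_prompt))  # around ten tokens
print(estimate_tokens(long_prompt))   # roughly a hundred times more
```

A single pasted file can easily cost as much context budget as a hundred short questions, which is why the free tier's wall arrives after only a handful of coding turns.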


2. At a glance: which AI, what price, and how it fits you

2.1 High-level recommendation

For your constraints:

  • Best “main assistant” for long, private coding chats
    Proton Lumo Plus at about $12.99 / month (or ~€12.99). (galaxus.at)

  • Best “power tool” when you need maximum coding power per dollar
    MiniMax M2 via an API gateway (OpenRouter or similar), at around $0.15–0.30 per million input tokens, $0.60–1.20 per million output tokens depending on provider. (LLM Stats)

  • Optional extra privacy-friendly general assistant
    Mistral Le Chat Pro at about $14.99 / month if you want a second EU-based assistant, with strong privacy and higher limits than its free tier. (DataCamp)

  • Claude Pro

    • Best pure coding quality in your list, but
    • US-hosted, trains on chats by default unless you switch that off, and still has explicit 5-hour and weekly caps. (Frank, North.)
    • Only choose it if you accept those trade-offs.

3. Comparison table

3.1 Core comparison

Claude 4.5 Free

  • Pricing (personal): $0
  • Usage limits: small, dynamic per-day and per-5-hour caps, lowest tier; exact numbers are not public, Pro is "≥5×" this. (Claude Support Center)
  • Privacy / data use: consumer plan; chats can be used for training by default, with up to 5-year retention if you allow training, 30-day retention if you disable it. US-based. (anthropic.com)
  • Coding strength: very strong coding model (Claude Sonnet 4.5) but heavily constrained in the free tier; a sweet spot for short sessions, not long coding. (Medium)
  • Fit for you: poor; exactly the limits you complain about.

Claude Pro (Sonnet 4.5)

  • Pricing: ~$20 / month
  • Usage limits: about 45 messages / 5 hours, plus new weekly caps from Aug 2025. More than free, but still finite and enforced. (Frank, North.)
  • Privacy / data use: same model as free: configurable training toggle, 5-year vs 30-day retention, US cloud. (anthropic.com)
  • Coding strength: near-frontier; ~77.2% on SWE-Bench Verified (real GitHub issues). (Medium)
  • Fit for you: technically excellent but still capped; only aligned with strong data-protection preferences if you accept US storage and manage settings carefully.

Proton Lumo Plus

  • Pricing: ~$12.99 / month (≈ €12.99; ≈ €9.99/month paid annually) (galaxus.at)
  • Usage limits: marketed as unlimited chats for individuals, with larger files and extended history; no 45-messages-per-5-hours style cap. (eduearnhub.com)
  • Privacy / data use: zero logs, zero-access encrypted chat history, no training on chats, EU infrastructure, no partnerships with US or Chinese AI vendors. (Proton)
  • Coding strength: a mix of open models tuned by Proton; good at everyday coding, summarization, and planning; not top of raw SWE-Bench rankings but designed for safe, private assistance. (Proton)
  • Fit for you: excellent if privacy and long, continuous usage matter more than absolute peak coding benchmarks.

Mistral Le Chat Pro

  • Pricing: ~$14.99 / month, with some cheaper student/discount options. (DataCamp)
  • Usage limits: Pro is advertised as 6× free usage and "unlimited chats" within fair use. Free is around 20–25 messages/day, so Pro is roughly 120–150+ per day. (help.mistral.ai)
  • Privacy / data use: EU-based; conversations are excluded from training by default, memory is opt-in, and there is an "incognito" mode; external analyses rank it as the least privacy-invasive mainstream chatbot. (Data Studios ‧ Exafin)
  • Coding strength: backed by strong Mistral models (including coding-oriented ones); a fast, competitive general assistant, good at code but less heavily benchmarked on SWE-Bench than Claude or M2. (Tom's Guide)
  • Fit for you: very good if you want a mainstream, EU, privacy-conscious assistant with higher usage and can accept non-zero logging.

MiniMax M2 (API)

  • Pricing: pay-as-you-go, ~$0.15–0.30 / 1M input tokens and $0.60–1.20 / 1M output tokens depending on provider. (LLM Stats)
  • Usage limits: effectively no message cap; you are limited by token budget and provider rate limits. Context windows of hundreds of thousands of tokens (some providers advertise up to multi-million). (OpenRouter)
  • Privacy / data use: model weights are open under a modified MIT-style license, so you can self-host for full control. If you use third-party APIs, data passes through MiniMax (China-based) or gateways, with varying policies. (ModelScope)
  • Coding strength: designed for coding and agents; around 69.4% on SWE-Bench Verified, near top proprietary models and very strong for real coding tasks. (Medium)
  • Fit for you: excellent as a cheap, powerful coding engine, but not a polished out-of-the-box chat UI; privacy depends on how and where you run it.

4. Service-by-service analysis for your situation

4.1 Claude Pro: strong but capped and US-hosted

What you gain compared to free

  • Price: about $20/month. (Frank, North.)
  • Session limits: around 45 messages per 5-hour window for normal Pro usage, which is roughly 5× free. (Frank, North.)
  • Weekly caps: starting August 2025, additional weekly usage limits apply, especially for the highest models. (ainativedev.io)

This will feel much better than free if your current free window dies after 10–15 long messages. You could often do a full 1–2 hour coding session before hitting the limit.

Coding quality

  • Sonnet 4.5 is one of the strongest coding models available.
  • On SWE-Bench Verified (real GitHub issues) it scores about 77.2% solved, which puts it at or near the top of all public models. (Medium)

So if you only cared about quality and ignored privacy and caps, Claude Pro would be excellent.

Privacy and data use

  • Consumer terms update (Aug 2025):

    • If you let them use your data for training, they can retain chats for up to 5 years. (anthropic.com)
    • If you disable training in settings, retention is 30 days. (anthropic.com)
  • All of this is on US-based infrastructure, with the usual legal exposure.

This is similar in spirit to ChatGPT and Gemini, just with more explicit toggles.

Fit for you

  • It fixes the “very short” free window by giving you a bigger bucket, but
  • You still get hard caps (5-hour and weekly) and must accept US storage and retention.
  • If your privacy stance is “no ChatGPT, no Gemini,” Claude Pro is only acceptable if you consciously decide that this level of retention and jurisdiction is acceptable.

4.2 Proton Lumo Plus: privacy-first, “all-day” usage

What Lumo is

  • A privacy-first assistant from Proton (Proton Mail, ProtonVPN, etc.).
  • Uses open-source models orchestrated by Proton, running only on Proton’s European infrastructure. (TechRadar)

Pricing and limits

  • Free version: basic usage with limits, suitable for light testing. (eduearnhub.com)

  • Lumo Plus: about $12.99 / month (or €12.99, with ~€9.99/month if paid yearly). (galaxus.at)

    • Includes unlimited chats under fair-use, extended encrypted history, and multiple large file uploads. (eduearnhub.com)

For your use, the important part is: no fixed 45-messages-per-5-hours style wall.

Privacy model

Proton emphasizes:

  • No logs and no tracking: they claim they do not keep server-side logs of your questions and answers. (Proton)
  • Zero-access encrypted histories: your saved chat history is encrypted so that Proton cannot read it; only your devices hold the keys. (Proton)
  • No training on your chats: user prompts are not used to train the models. (Proton)
  • Hosted fully in Europe, under GDPR, without US or Chinese AI partners. (TinkeringProd)

This is very close to the strongest realistic privacy guarantee you can get without self-hosting.

Coding ability

  • Lumo exposes generic capabilities: summarization, planning, coding help, file analysis. (TechRadar)
  • The underlying models are strong modern open LLMs but are not beating Claude Sonnet or M2 on the toughest coding benchmarks.

In practice:

  • For common programming tasks, refactors, and debugging, Lumo should be “good enough.”
  • For very complex multi-file repairs or benchmark-level tasks, M2 or Claude still has an edge.

Fit for you

  • It solves your usage window problem by effectively giving you “all-day” chat.
  • It aligns with your privacy requirements far better than Claude, Gemini, ChatGPT.
  • The only real trade-off is a small loss in raw coding power compared to the very top proprietary models.

For someone who strongly values privacy and long coding conversations, this is a very good main choice.

4.3 Mistral Le Chat Pro: EU assistant with strong privacy balance

What Le Chat is

  • A French assistant based on Mistral models, with chat web UI and multiple plans. (DataCamp)

Pricing and limits

  • Free: roughly 20–25 messages per day, plus basic features. (Wise)
  • Pro: around $14.99 / month, marketed as 6× free usage and "unlimited chats" within fair use. (DataCamp)

So a normal user can use Le Chat Pro for long daily sessions without worrying about caps.

Privacy model

Recent analyses and Mistral’s own statements say:

  • Conversations are not used for model training by default. (Data Studios ‧Exafin)
  • Memory is opt-in: the assistant only retains long-term info if you explicitly enable it. (Data Studios ‧Exafin)
  • There is an incognito mode to avoid storing history at all. (reworked.co)
  • A 2025 study rated Le Chat as the least privacy-invasive among popular AI chatbots, noting that it collects limited personal data and restricts how prompts are shared with providers. (euronews)

So Le Chat is a serious privacy-conscious option, though it is not as strict as Lumo’s zero-logs and zero-access encryption.

Coding ability

  • Mistral’s models are highly competitive on standard reasoning and coding benchmarks, and they also have dedicated coding models (e.g. Codestral). (AFP)
  • For your purposes, Le Chat Pro is strong enough for day-to-day coding support and often comparable in feel to big US models, especially on mainstream languages.

Fit for you

  • Good balance: more mainstream model ecosystem than Lumo, strong privacy reputation, generous usage.
  • A logical choice if you want a “big” assistant but want to stay away from US big tech.

4.4 MiniMax M2: high-power coding engine, not a consumer chat app

What M2 is

  • A large Mixture-of-Experts (MoE) open-weight model from MiniMax, with 230B total parameters and about 10B active per token, optimized for coding and agent workflows. (ModelScope)
  • Open weights under a modified MIT-style license, so you can self-host or fine-tune. (AI Business Research Institute.)

Coding benchmarks

  • On SWE-Bench Verified (real GitHub issues, multi-turn tool use), M2 scores about 69.4% solved. (Medium)
  • The same article puts Claude Sonnet 4.5 at around 77.2% on that benchmark, so M2 is a bit weaker but still clearly top tier among all models, beating many other open models. (Medium)

This is very strong for serious coding tasks.

Pricing and limits

  • Direct MiniMax API: about $0.30 / 1M input tokens and $1.20 / 1M output tokens. (LLM Stats)
  • Some gateways (e.g. OpenRouter, Apidog) offer variants at $0.15–0.25 input and $0.60–1.02 output per million tokens. (OpenRouter)
  • Context window is hundreds of thousands of tokens (OpenRouter lists 262k), and some marketing claims multi-million context depending on configuration. (OpenRouter)

For a personal user, this is extremely cheap: even heavy use often costs only a few dollars per month.
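To make that concrete, here is a small Python sketch that computes session cost from the direct-API prices quoted above ($0.30 per million input tokens, $1.20 per million output tokens); the 50-prompt workload is a made-up illustration, not a measured figure:

```python
def session_cost_usd(input_tokens: int, output_tokens: int,
                     in_price_per_m: float = 0.30,
                     out_price_per_m: float = 1.20) -> float:
    """Cost in USD for a token-metered session at the quoted per-million prices."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# A heavy coding day: 50 prompts averaging 8k input and 2k output tokens each.
daily = session_cost_usd(50 * 8_000, 50 * 2_000)
print(f"${daily:.2f}")  # $0.24
```

Even at that pace every day, a month comes out around $7, which is why pay-as-you-go pricing beats a flat subscription for bursty coding work.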

Privacy model

  • If you self-host the open weights on your own cloud or machine, you control all data.
  • If you use the MiniMax API or third-party gateways, your prompts and code are processed by MiniMax (Shanghai-based) and the gateway provider. Policies vary and are not as tightly EU-regulated as Proton.

Given you avoid ChatGPT/Gemini for privacy, you would need to decide whether sending code to a Chinese cloud provider is acceptable. From a privacy-law perspective, this is not clearly better than a US provider.

Fit for you

Use M2 as:

  • A specialised coding tool for hard problems and big codebases, not your main conversational assistant.

  • Accessed via:

    • A simple CLI or VS Code extension wired to the MiniMax or OpenRouter API, or
    • A minimal web UI that you run yourself.

It gives you excellent coding power at low cost, but you must set it up yourself and carefully choose where it runs.
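For the CLI route, a minimal sketch of calling M2 through a gateway looks like the following. OpenRouter exposes an OpenAI-compatible chat-completions endpoint; the model id "minimax/minimax-m2" is an assumption here, so check the gateway's model list before relying on it:

```python
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"  # OpenAI-compatible endpoint
MODEL_ID = "minimax/minimax-m2"  # assumed model id; verify against the gateway's catalog

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completions request for a coding prompt."""
    payload = {
        "model": MODEL_ID,
        "messages": [
            {"role": "system", "content": "You are a careful coding assistant."},
            {"role": "user", "content": prompt},
        ],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    # Only send if a key is configured; the request itself carries your code
    # to the gateway and model provider, so mind what you paste.
    api_key = os.environ.get("OPENROUTER_API_KEY")
    if api_key:
        req = build_request("Explain this traceback and suggest a fix: ...", api_key)
        with urllib.request.urlopen(req) as resp:
            print(json.load(resp)["choices"][0]["message"]["content"])
```

The same request shape works against any OpenAI-compatible endpoint, so switching from a gateway to a self-hosted M2 deployment later only means changing `API_URL`.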


5. Putting this into a concrete plan for you

You want:

  • Long coding chats
  • Strong data protection
  • No Gemini, no ChatGPT
  • No local LLM on your laptop

A pragmatic plan:

5.1 Main assistant: Proton Lumo Plus

Use Lumo Plus as your default coding partner and general assistant:

  • You get effectively all-day usage without thinking about 5-hour windows. (eduearnhub.com)
  • Your chats are not logged, not used for training, and are zero-access encrypted in Proton’s EU infrastructure. (Proton)
  • Coding quality is good enough for writing and refactoring code, explaining errors, and iterating in long conversations.

This directly addresses your main pain point: short coding sessions and privacy worries.

5.2 Heavy coding tool: MiniMax M2

Add M2 as your on-demand, high-power coding engine:

  • When you have a big, messy bug or large repository, call M2 via an API client.
  • Use its large context and strong SWE-Bench performance to generate patches, refactors, or tool-using agents. (Medium)
  • Monitor token usage, but note that prices are low enough that most personal tasks remain cheap.

You can start via an API gateway (simpler) and later, if you want maximal privacy, consider self-hosting the open weights on a remote server.

5.3 Optional: Mistral Le Chat Pro as a second “brain”

If you want:

  • Another strong general assistant,
  • Also non-US and relatively privacy-friendly,

then Le Chat Pro is a useful additional tool:

  • Reasonable price (~$14.99/month) and generous usage. (DataCamp)
  • EU-based, with default no-training, opt-in memory, and incognito mode. (Data Studios ‧Exafin)

You could, for example:

  • Use Lumo for anything that involves sensitive code or personal data.
  • Use Le Chat Pro for more general reasoning, explanations, and experiments where privacy risk is lower.

5.4 Claude Pro: only if you consciously accept the trade-offs

Given your starting position:

  • If you truly do not want Gemini and ChatGPT “for data protection reasons,” you should treat Claude similarly, because:

    • It now trains on chats by default for consumer plans unless you opt out, with retention up to 5 years in that case. (anthropic.com)
    • Even with training disabled, there is still 30-day retention and US jurisdiction. (anthropic.com)

So the cleanest approach is:

  • Either avoid Claude as your primary assistant, or
  • Use it only for non-sensitive tasks, after turning training off, accepting that you still have hard message and weekly caps. (Frank, North.)

6. Practical impact on your day-to-day coding

With the recommended combination:

  • You open Lumo in the browser and keep the same chat running for hours while you:

    • Paste code, ask for improvements, and iterate.
    • Paste error messages and have it suggest fixes.
    • Summarize long chains occasionally to keep context manageable.
  • When Lumo struggles with a particularly complex or large task, you:

    • Move that specific problem into your M2 tool.
    • Let M2 process the repo or large file with its big context and strong coding ability.

You no longer:

  • Hit small message caps after a dozen coding prompts.
  • Worry that your chats are being used as training data by default.
  • Have to run a big LLM on your laptop.

Short bullet summary

  • Claude free is limited by design. Claude Pro raises the cap to about 45 messages per 5 hours but still keeps strict session and weekly limits and uses US infrastructure with up to 5-year retention if you allow training. (Frank, North.)
  • Proton Lumo Plus costs about $12.99 / month, offers unlimited chats, and uses zero-logs, zero-access encryption with no training on your chats, all in EU infrastructure. This is the most aligned with your privacy and “long coding sessions” requirements. (European Alternatives)
  • Mistral Le Chat Pro (~$14.99/month) gives you another EU, privacy-oriented assistant with strong models and generous usage, but with less extreme privacy guarantees than Lumo. (DataCamp)
  • MiniMax M2 is an open-weight, coding-optimised model with very strong performance on SWE-Bench Verified (~69.4% solved) and very low per-token cost, best used as a specialised coding engine via API, not as your main chat UI. (Medium)
  • For your concrete needs, the strongest recommendation is: Lumo Plus as primary assistant, MiniMax M2 as a coding power tool, and at most Le Chat Pro as an additional EU assistant if you want a second “brain.”