New Step 3.5 Flash MoE Model Rivals ChatGPT 5.2 with 11B Active Parameters
Built on a sparse Mixture-of-Experts (MoE) architecture, it selectively activates only 11B of its 196B parameters per token. This "intelligence density" allows it to rival the reasoning depth of top-tier proprietary models while maintaining the agility required for real-time interaction.
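To illustrate how a sparse MoE layer activates only a fraction of its parameters per token, here is a minimal, generic top-k routing sketch in PyTorch. It is not the model's actual implementation; the layer sizes, expert count, and `top_k` value are illustrative assumptions chosen for readability, not figures from the announcement.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Toy sparse Mixture-of-Experts feed-forward layer with top-k routing.
    All dimensions below are hypothetical, not the announced model's."""

    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # A lightweight router scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is an independent feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):
        # x: (batch, seq, d_model) -> flatten tokens for routing
        tokens = x.reshape(-1, x.size(-1))
        logits = self.router(tokens)                        # (tokens, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)  # keep only k experts per token
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(tokens)
        # Only the selected experts run, so most parameters stay idle for any given token.
        for e, expert in enumerate(self.experts):
            mask = (indices == e)
            if mask.any():
                token_ids, slot = mask.nonzero(as_tuple=True)
                out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(tokens[token_ids])
        return out.reshape_as(x)

if __name__ == "__main__":
    layer = SparseMoELayer()
    x = torch.randn(2, 10, 64)
    print(layer(x).shape)  # torch.Size([2, 10, 64])
```

The key design point is the router: each token pays the compute cost of only its top-k experts, so active parameters per token stay a small fraction of total parameters, which is the same principle behind the 11B-active / 196B-total ratio described above.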