📐 Mathematics

Sampling Theorem & Aliasing - 采样定理与混叠

Comprehensive interactive visualization of the Nyquist-Shannon sampling theorem and aliasing phenomenon in signal processing. Features core theorem: f_s >= 2f_max for perfect reconstruction, sampling process: x[n] = x(n/f_s), and aliased frequency: f_alias = |f - k*f_s|. Real-time visualization includes: (1) Main signal canvas showing original continuous signal (blue), sample points (red dots), and reconstructed signal (green dashed) with adjustable sampling rate f_s, signal frequency f, amplitude, and phase. (2) Frequency spectrum display showing original spectrum, spectral replicas from sampling, and aliased components highlighted in orange. (3) Nyquist indicator bar with color-coded status (green for adequate, orange for warning, red for aliasing). (4) Reconstruction methods comparison: Zero-Order Hold (staircase), Linear Interpolation, Sinc Interpolation (ideal Whittaker-Shannon), and Cubic Spline. (5) Real-time metrics: sampling rate, Nyquist rate, max frequency, aliased frequency, and reconstruction MSE. Multiple signal types: sine wave, composite (multi-frequency), square wave (demonstrating harmonics), and custom frequency addition. Preset scenarios: Perfect (f_s = 4f), Critical (f_s = 2f), Aliasing (f_s = 1.5f), Severe (f_s = 1.1f), and Wagon Wheel effect demo. Animation mode shows scanning line moving across signal with real-time sample capture. Educational content covers sampling theorem statement, Nyquist condition, aliasing mechanism with wagon wheel effect analogy, reconstruction method formulas, frequency folding around f_s/2, and real-world applications: CD audio (44.1 kHz), video frame rates (24/30/60 fps), image pixel sampling, and digital communications ADC/DAC. Uses KaTeX for formula rendering including f_s >= 2f_max, x[n] = x(nT_s), f_alias = |f - k*f_s|, and sinc interpolation formula. Multi-language support (zh, en, es, fr, de, ru, pt).
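
The frequency-folding rule f_alias = |f - k*f_s| can be sketched in a few lines of Python; the function name `aliased_frequency` and the 7 Hz / 10 Hz example are illustrative, not part of the visualization:

```python
import math

def aliased_frequency(f, fs):
    """Apparent baseband frequency of a tone f sampled at rate fs.

    Implements f_alias = |f - k*fs| with k the nearest integer multiple,
    which folds f into the Nyquist band [0, fs/2].
    """
    k = round(f / fs)
    return abs(f - k * fs)

# 7 Hz sampled at 10 Hz violates Nyquist (10 < 2*7): it masquerades as 3 Hz.
print(aliased_frequency(7, 10))   # 3
print(aliased_frequency(3, 10))   # 3  (adequately sampled: unchanged)

# The two tones really are indistinguishable at the sample instants
# (up to a phase flip): sin(2*pi*7*n/10) == -sin(2*pi*3*n/10) for integer n.
for n in range(8):
    assert abs(math.sin(2 * math.pi * 7 * n / 10)
               + math.sin(2 * math.pi * 3 * n / 10)) < 1e-9
```

The final loop is why the reconstruction in the demo turns green only when f_s exceeds the Nyquist rate: below it, the samples are consistent with a lower-frequency signal.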

📐 Mathematics

Convolution - 卷积可视化

Interactive visualization of convolution demonstrating the flip-slide-multiply-sum process in both time and frequency domains. Features continuous formula: (f*g)(t) = ∫ f(τ)·g(t-τ) dτ, discrete formula: (x*h)[n] = Σ x[k]·h[n-k], and convolution theorem: FFT(x*h) = FFT(x)·FFT(h). Time domain visualization shows: (1) Input signal x[n] (static), (2) Flipped kernel h[-k] showing horizontal mirror, (3) Shifted kernel h[n-k] sliding to current position n, (4) Product view showing x[k]·h[n-k] multiplication at overlapping samples, (5) Output y[n] building up progressively with current position marker. Frequency domain view demonstrates: (1) FFT magnitude of input signal |FFT(x[n])|, (2) FFT magnitude of kernel |FFT(h[n])|, (3) Product of FFTs |FFT(x)·FFT(h)|, (4) FFT of output |FFT(y[n])| showing convolution theorem. Multiple signal types: rectangular pulse, triangular pulse, Gaussian, sinc function, and custom drawable signal. Kernel options: rectangular (moving average), Gaussian (smoothing), derivative (edge detection), sinc (low-pass filter), Sobel (edge detection). Adjustable parameters: kernel size (3-21), normalization toggle, sampling rate/resolution, animation speed, and manual position control. Animation controls: play/pause, step forward/backward, reset, position slider, and speed control. Real-time metrics display: current position n, output value y[n], sum of products, overlapping samples count, and detailed calculation breakdown showing each term. Educational content covers four-step process (flip, slide, multiply, sum), key concepts (commutative, associative, identity element, convolution theorem), and practical applications: image filtering (blur, sharpen, edge detection), audio processing (reverb, EQ), neural network convolution layers, probability distributions, and signal smoothing. Color coding: blue for input signal, red for kernel, green for output, purple for product, orange for overlap. 
Custom drawing mode allows hand-drawn signals with mouse/touch input. Multi-language support (zh, en, es, fr, de, ru, pt).
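
The flip-slide-multiply-sum procedure and the convolution theorem can be checked with a short dependency-free sketch; `conv_direct` and the naive `dft` (standing in for the FFT) are illustrative names, not the visualization's internals:

```python
import cmath

def conv_direct(x, h):
    """Discrete convolution y[n] = sum_k x[k]*h[n-k] (flip-slide-multiply-sum)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):        # only where the kernel overlaps
                y[n] += x[k] * h[n - k]
    return y

def dft(x):
    """Naive DFT, standing in for the FFT in the convolution-theorem check."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [1, 2, 3]
h = [0.5, 0.5]                             # 2-tap moving average kernel
y = conv_direct(x, h)
print(y)                                   # [0.5, 1.5, 2.5, 1.5]

# Convolution theorem: after zero-padding both signals to len(x)+len(h)-1,
# DFT(x) * DFT(h) equals DFT(x * h) bin by bin (circular == linear conv).
L = len(y)
X = dft(x + [0] * (L - len(x)))
H = dft(h + [0] * (L - len(h)))
Y = dft(y)
assert all(abs(X[k] * H[k] - Y[k]) < 1e-9 for k in range(L))
```

The zero-padding step matters: without it the pointwise product of DFTs gives circular, not linear, convolution.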

📐 Mathematics

Gradient Descent vs Newton's Method - 梯度下降 vs 牛顿法

Comprehensive interactive visualization comparing first-order (Gradient Descent, Momentum, Adam) and second-order (Newton's Method) optimization algorithms on 2D objective functions. Features five test functions: Convex Quadratic (x² + 2y²), Rosenbrock ((1-x)² + 100(y-x²)²), Rastrigin (20 + (x²-10cos(2πx)) + (y²-10cos(2πy))), Beale, and Himmelblau. Mathematical formulas: Gradient Descent x_{k+1} = x_k - η∇f(x_k), Newton's Method x_{k+1} = x_k - H^(-1)(x_k)∇f(x_k), Momentum v_{k+1} = βv_k - η∇f(x_k), x_{k+1} = x_k + v_{k+1}. Interactive features: function selector, starting point (click on contour or sliders), algorithm comparison mode (side-by-side), hyperparameter controls (learning rate η, momentum β, damping factor, max iterations), animation controls (play/pause, step, reset, speed), contour plot with color-coded elevation map and level curves, real-time metrics (iteration, position, function value, gradient magnitude, step size), convergence graph (log scale f(x) vs iteration), gradient arrows, and step markers. Educational content: algorithm comparison table (convergence speed, memory, computation per iteration, use cases), key concepts (quadratic vs linear convergence, oscillation in narrow valleys, Hessian matrix curvature information, local vs global minima), and practical guidance on when to use each algorithm. Uses KaTeX for formula rendering. Multi-language support (zh, en, es, fr, de, ru, pt).
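
A minimal sketch of the two update rules on the Convex Quadratic preset f(x, y) = x² + 2y²; step sizes, iteration counts, and function names are illustrative:

```python
# f(x, y) = x^2 + 2*y^2 has gradient (2x, 4y) and constant Hessian diag(2, 4),
# so the Newton step H^{-1} * grad can be written out by hand.
def grad(x, y):
    return 2 * x, 4 * y

def gradient_descent(x, y, eta=0.1, steps=100):
    for _ in range(steps):
        gx, gy = grad(x, y)
        x, y = x - eta * gx, y - eta * gy   # x_{k+1} = x_k - eta * grad f(x_k)
    return x, y

def newton_step(x, y):
    gx, gy = grad(x, y)
    return x - gx / 2, y - gy / 4           # x_{k+1} = x_k - H^{-1} grad f(x_k)

# On a quadratic, Newton's method jumps to the minimizer (0, 0) in one step;
# gradient descent only shrinks each coordinate by a constant factor per step.
print(newton_step(3.0, -2.0))               # (0.0, 0.0)
print(gradient_descent(3.0, -2.0, steps=10))
```

This is exactly the quadratic-vs-linear convergence contrast the comparison table describes: the Hessian rescales the narrow valley so one step suffices.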

📐 Mathematics

Cross-Entropy Loss Visualization

Comprehensive interactive visualization of cross-entropy loss, the most commonly used loss function for classification tasks in machine learning. Features three main sections: (1) Binary Cross-Entropy - Interactive exploration with adjustable true label (0/1) and predicted probability (0-1), real-time loss calculation using L = -[y·log(ŷ) + (1-y)·log(1-ŷ)], gradient visualization, and dynamic loss curve plotting showing both y=0 and y=1 cases. (2) Categorical Cross-Entropy - Multi-class demonstration with interactive 3-class softmax visualization, adjustable logits inputs with real-time probability distribution updates, true class selection, and softmax formula σ(z)_i = e^(z_i) / Σ(e^(z_j)). (3) Loss Function Comparison - Side-by-side comparison of cross-entropy vs Mean Squared Error (MSE), interactive loss curves comparison (y=1 scenario), pros/cons table analyzing gradient behavior, convexity, probabilistic interpretation, and best use cases. Educational content covers information theory perspective (KL divergence relation: H(p,q) = D_KL(p||q) + H(p)), why MSE is unsuitable for classification (weak gradient signals), numerical stability tips (log-sum-exp trick), label smoothing, class imbalance handling, and activation function selection (sigmoid for binary, softmax for multi-class). Uses Chart.js for interactive plots and KaTeX for formula rendering. Multi-language support (zh, en, es, fr, de, ru, pt).
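
The three formulas can be exercised directly; this sketch applies the clipping and max-subtraction stabilization tricks mentioned above (function names are illustrative):

```python
import math

def binary_cross_entropy(y, y_hat, eps=1e-12):
    """L = -[y*log(y_hat) + (1-y)*log(1-y_hat)], clipped for stability."""
    y_hat = min(max(y_hat, eps), 1 - eps)
    return -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))

def softmax(logits):
    """Numerically stable softmax: subtract the max logit before exp."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def categorical_cross_entropy(true_class, logits):
    return -math.log(softmax(logits)[true_class])

# Confident correct prediction -> small loss; confident wrong -> large loss.
print(round(binary_cross_entropy(1, 0.9), 4))   # 0.1054
print(round(binary_cross_entropy(1, 0.1), 4))   # 2.3026
print(round(categorical_cross_entropy(0, [2.0, 1.0, 0.1]), 4))   # 0.417
```

The asymmetry between the first two prints is the steep gradient signal that makes cross-entropy preferable to MSE for classification.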

📐 Mathematics

Newton Fractal - Basin of Attraction Visualization

Interactive Newton fractal visualization demonstrating Newton's method for finding polynomial roots on the complex plane. Features the iteration formula z_{n+1} = z_n - f(z_n)/f'(z_n) applied to complex polynomials. Each pixel represents a starting point, colored by which root it converges to (basin of attraction) and how quickly (brightness indicates convergence speed). Six polynomial presets: z^3-1 (3 cube roots of unity), z^4-1 (4 fourth roots), z^5-1 (5 fifth roots), z^6-1 (6 sixth roots), z^4+1 (diagonal roots), and z^3-2z+2 (non-symmetric roots). Adjustable parameters: max iterations (10-200), convergence tolerance (0.00001-0.01), five color schemes (rainbow, pastel, neon, earth tones, cool colors), quality presets (low/medium/high). Interactive features: zoom with mouse wheel, pan by dragging, click roots to highlight their basins, animation mode cycling through basins, and toggle root markers. Real-time cursor position display in complex coordinates. Educational content covers Newton's method history (Newton 1669, Raphson 1690), mathematical principle of iterative root finding, fractal boundary explanation via sensitive dependence on initial conditions, and applications in numerical analysis, complex dynamics, art, education, and physics. Multi-language support (zh, en, es, fr, de, ru, pt).
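
The per-pixel computation reduces to the following sketch for the z³ - 1 preset; the helper name `newton_basin` and the tolerance/iteration defaults are illustrative:

```python
def newton_basin(z, max_iter=50, tol=1e-6):
    """For f(z) = z^3 - 1, iterate z <- z - f(z)/f'(z) and report which
    cube root of unity the starting point converges to, and how fast."""
    roots = [complex(1, 0),
             complex(-0.5, 3**0.5 / 2),
             complex(-0.5, -3**0.5 / 2)]
    for i in range(max_iter):
        if abs(z) < tol:                 # f'(z) = 3z^2 ~ 0: step undefined
            return None, i
        z = z - (z**3 - 1) / (3 * z**2)
        for r_idx, r in enumerate(roots):
            if abs(z - r) < tol:
                return r_idx, i          # (basin index, iterations used)
    return None, max_iter

print(newton_basin(2 + 0j))      # basin 0: converges to the real root 1
print(newton_basin(-1 + 1j))
```

In the visualization, the basin index picks the pixel's hue and the iteration count its brightness; the fractal boundary appears where neighboring starting points land in different basins.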

📐 Mathematics

Backpropagation Deep Dive - Chain Rule to Engineering Practice

Interactive deep-dive visualization of backpropagation: historical context, chain-rule derivation, backward error flow, gradient stability experiments, algorithm workflow, and practical training engineering checklist.
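
As a concrete instance of the chain-rule derivation, here is a hand-derived backward pass for a tiny 1-1-1 sigmoid network, checked against a finite difference; all names and numbers are illustrative, not taken from the visualization:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def forward_backward(x, y, w1, b1, w2, b2):
    """y_hat = sigmoid(w2*sigmoid(w1*x+b1)+b2), L = 0.5*(y_hat-y)^2.
    The backward pass is the chain rule applied layer by layer."""
    # forward pass, caching intermediates
    h = sigmoid(w1 * x + b1)
    y_hat = sigmoid(w2 * h + b2)
    L = 0.5 * (y_hat - y) ** 2
    # backward pass: dL/dy_hat -> dL/dz2 -> (dL/dw2, dL/db2, dL/dh) -> dL/dz1
    dL_dyhat = y_hat - y
    dL_dz2 = dL_dyhat * y_hat * (1 - y_hat)     # sigmoid'(z) = s(z)(1-s(z))
    dL_dw2, dL_db2 = dL_dz2 * h, dL_dz2
    dL_dh = dL_dz2 * w2                         # error flows backward via w2
    dL_dz1 = dL_dh * h * (1 - h)
    dL_dw1, dL_db1 = dL_dz1 * x, dL_dz1
    return L, (dL_dw1, dL_db1, dL_dw2, dL_db2)

# Check one analytic gradient against a finite difference.
params = (0.5, 0.1, -0.3, 0.2)
L, grads = forward_backward(1.0, 1.0, *params)
eps = 1e-6
Lp, _ = forward_backward(1.0, 1.0, params[0] + eps, *params[1:])
print(abs(grads[0] - (Lp - L) / eps) < 1e-5)    # True: they agree
```

The repeated multiplication by sigmoid'(z) ≤ 0.25 in the backward pass is also the mechanism behind the gradient-stability experiments: stacking such factors shrinks gradients layer by layer.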

📐 Mathematics

Feedforward Neural Network / MLP - The Foundation of Deep Learning

Comprehensive visualization of feedforward neural networks and multi-layer perceptrons. Seven interactive modules: (1) Network Structure - Visualize data flowing through configurable input, hidden, and output layers with animated forward pass. (2) Layer Transformation - Interactive demo showing linear transformation plus nonlinear activation, demonstrating why nonlinearity is essential. (3) Activation Function Gallery - Compare Sigmoid, Tanh, ReLU, Leaky ReLU, and GELU with their formulas, ranges, gradients, and pros/cons. (4) Universal Approximation Theorem - Visual proof that MLP can approximate any continuous function with adjustable neurons and layers. (5) Backpropagation Animation - Watch gradients flow backward through the network with chain rule visualization. (6) Transformer MLP Block - Understanding why every Transformer contains an MLP/FFN with block and flow views. (7) Practical Guide - Network design recommendations, initialization methods (He, Xavier), regularization techniques, real-world applications, historical timeline from Rosenblatt to Transformers, and MLP limitations. Covers the key insight: Attention handles global token interaction while MLP handles per-token feature refinement. Multi-language support (zh, en, es, fr, de, ru, pt).
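
The forward pass and the "linear layers collapse" argument from module (2) can be sketched without any libraries; helper names, weights, and shapes are illustrative:

```python
def relu(v):
    return [max(0.0, z) for z in v]

def affine(W, b, v):
    """Compute W @ v + b with W stored as a list of rows."""
    return [sum(w * x for w, x in zip(row, v)) + bi
            for row, bi in zip(W, b)]

def mlp_forward(x, layers):
    """Forward pass: affine map then ReLU, identity on the output layer."""
    h = x
    for i, (W, b) in enumerate(layers):
        z = affine(W, b, h)
        h = relu(z) if i < len(layers) - 1 else z
    return h

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

W1, b1 = [[1.0, -2.0], [0.5, 1.0]], [0.1, -0.1]   # 2 -> 2 hidden layer
W2, b2 = [[2.0, 1.0]], [0.0]                      # 2 -> 1 output layer
x = [1.0, 1.0]
print(mlp_forward(x, [(W1, b1), (W2, b2)]))

# Why nonlinearity is essential: drop the ReLU and the two layers collapse
# into one affine map (W2 @ W1) x + (W2 @ b1 + b2) -- depth adds nothing.
stacked = affine(W2, b2, affine(W1, b1, x))
collapsed = affine(matmul(W2, W1), affine(W2, b2, b1), x)
assert all(abs(s - c) < 1e-9 for s, c in zip(stacked, collapsed))
```

The final assert is the whole argument of module (2) in two lines: without the activation, any stack of layers is equivalent to a single one.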

📐 Mathematics

Perceptron/Neuron - The Fundamental Unit of Deep Learning

Comprehensive interactive visualization of the perceptron, the atomic structure of neural networks. Six modules: (1) Perceptron Basic Form - Interactive demo with adjustable inputs, weights, bias, and activation function showing the computation y = f(Σw_i*x_i + b). (2) Why Activation Functions - Visual proof that linear composition remains linear, demonstrating the three core purposes: introduce nonlinearity, control numerical range, provide differentiability. (3) Activation Function Gallery - Interactive plots of Step, Sigmoid, Tanh, ReLU, Leaky ReLU, Swish, and GELU with formulas, derivatives, pros/cons, and real-time calculator. (4) Gradient Flow Visualization - Shows gradient propagation through different activations with stability comparison table highlighting vanishing gradient problem. (5) Deep Networks Demo - Compare linear-only vs nonlinear networks with adjustable layer count, demonstrating the Universal Approximation Theorem. (6) Practical Guide - Best practices for hidden layers (ReLU default, GELU/Swish for Transformers), output layers (Sigmoid for binary, Softmax for multi-class, Linear for regression), initialization matching (He for ReLU, Xavier for Tanh/Sigmoid), and combination techniques (BatchNorm, Residual Connections). Covers historical evolution from Rosenblatt's 1958 perceptron through the ReLU revolution to modern GELU in Transformers. Multi-language support (zh, en, es, fr, de, ru, pt).
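
The basic form y = f(Σw_i·x_i + b) from module (1) fits in a few lines; the AND-gate weights below are a classic hand-picked example, not values from the visualization:

```python
def perceptron(x, w, b, f):
    """y = f(sum_i w_i * x_i + b)"""
    return f(sum(wi * xi for wi, xi in zip(w, x)) + b)

def step(z):                      # Rosenblatt's original activation
    return 1 if z >= 0 else 0

# Hand-picked weights realize the AND gate: the weighted sum crosses the
# bias threshold only when both inputs are 1.
w, b = [1.0, 1.0], -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, perceptron(x, w, b, step))
# (0, 0) 0   (0, 1) 0   (1, 0) 0   (1, 1) 1
```

A single perceptron draws one linear decision boundary, which is why XOR (not linearly separable) needs a hidden layer and a nonlinear activation.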

📊 Economics

New Economics of Industrial Policy - Juhasz, Lane, Rodrik

Comprehensive interactive visualization of the New Economics of Industrial Policy framework by Juhasz, Lane & Rodrik (2023-2024). Features the core triad framework showing interactions between policy rationale (market failures), political feasibility (state capacity + incentives), and policy implementation (governance structures). Theory evolution comparison table contrasting traditional market-failure approaches with the new empirical, political-economy-informed framework. Core research topics including evidence-first methods (DiD, synthetic control, quasi-experiments), political economy constraints (interest concentration, capture risks), governance tools (sunset clauses, conditional support), and state capacity dimensions. Global case studies: South Korea HCI drive (1960s-70s), Italy southern policy, US CHIPS Act (2022-2025), EU Green Deal, China comprehensive policy, Colombia/Brazil export support. Six-step policy evaluation process flowchart with feedback loops. Interactive policy design simulator adjusting market failure type, state capacity level, and political incentives to generate recommendations and risk assessments. Challenges radar chart visualizing political complexity, government failure, developing country constraints, spillovers, trade retaliation, and measurement issues. Research methods toolbox covering econometric methods, causal inference, text analysis/LLM tools, and political quantification. Based on NBER Working Paper 31538 and Annual Review of Economics 16:213-242. Multi-language support (zh, en, es, fr, de, ru, pt).

📊 Economics

Doughnut Economics - A 21st Century Framework for Sustainable Development

Interactive visualization of Kate Raworth's Doughnut Economics framework. Features the core doughnut model with 12 social foundations (food, water, health, education, energy, housing, income, gender equality, social equity, political voice, peace & justice, network access) and 9 ecological ceilings (climate change, biodiversity loss, land change, freshwater, nitrogen cycle, ocean acidification, chemical pollution, aerosols, ozone depletion). Seven paradigm shifts from traditional to 21st-century economics: change the goal, see the big picture, nurture human nature, think in systems, design to distribute, create to regenerate, be agnostic about growth. Mathematical formulation as constrained optimization: max wellbeing subject to S_i >= S_min and E_j <= E_max. Policy simulator with adjustable social investment, carbon tax, circular economy adoption, and inequality reduction. Real-world cases including Amsterdam city strategy, UK local authorities, Global South applications, and corporate implementations. Comparison table of traditional vs doughnut economics across goals, growth, resources, distribution, and policy. Multi-language support (zh, en, es, fr, de, ru, pt).
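
The constrained formulation (S_i >= S_min, E_j <= E_max) amounts to a feasibility check; the indicator names, normalized values, and thresholds below are illustrative, not Raworth's data:

```python
def in_doughnut(social, ecological, s_min, e_max):
    """A state is inside the doughnut when every social indicator meets its
    foundation (S_i >= S_min_i) and every ecological indicator stays under
    its ceiling (E_j <= E_max_j). Returns the violations of each kind."""
    shortfalls = [k for k, v in social.items() if v < s_min[k]]
    overshoots = [k for k, v in ecological.items() if v > e_max[k]]
    return shortfalls, overshoots

social = {"food": 0.9, "education": 0.6}           # normalized, illustrative
ecological = {"climate": 1.2, "freshwater": 0.7}   # 1.0 = planetary boundary
s_min = {"food": 0.8, "education": 0.8}
e_max = {"climate": 1.0, "freshwater": 1.0}

print(in_doughnut(social, ecological, s_min, e_max))
# (['education'], ['climate']): one shortfall plus one overshoot,
# so this state lies outside the safe and just space.
```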

📊 Economics

Modern Monetary Theory (MMT) - Theoretical Foundations, Analytical Framework, and Policy Implications

Comprehensive interactive visualization of Modern Monetary Theory (MMT) core concepts, sectoral balances identity, functional finance, and job guarantee mechanisms. Features interactive sectoral balances chart ((S-I) + (T-G) + (M-X) = 0) with real-time balance adjustments, government spending flow animation showing ΔNFA_private = G - T, consolidated government perspective (Treasury + Central Bank), functional finance principles (real resource constraints, inflation control, full employment priority, automatic stabilizers), job guarantee buffer stock visualization with business cycle simulation (recession, recovery, expansion, boom), monetary policy role comparison, real-world case studies (COVID-19 response, Japan, Eurozone), and theory evaluation. Multi-language support (zh, en, es, fr, de, ru, pt).
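
The sectoral balances identity can be illustrated with numbers chosen to satisfy the national accounts; all figures below are made up for the example:

```python
def sectoral_balances(S, I, T, G, M, X):
    """The three sectoral balances from the national accounting identity
    (S - I) + (T - G) + (M - X) = 0: a government deficit (G > T) must show
    up as private net saving and/or a current-account deficit."""
    private = S - I        # private sector net saving
    government = T - G     # government balance (negative = deficit)
    foreign = M - X        # foreign balance (negative = trade surplus)
    return private, government, foreign

# Illustrative figures (arbitrary units), chosen consistently with the
# identity: deficit spending of 100 with balanced trade adds exactly 100
# to private net financial assets (Delta NFA_private = G - T).
p, g, f = sectoral_balances(S=300, I=200, T=400, G=500, M=250, X=250)
print(p, g, f, p + g + f)   # 100 -100 0 0
```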

📊 Economics

New Structural Economics - Justin Yifu Lin's Third Wave of Development Economics

Comprehensive interactive visualization of Justin Yifu Lin's New Structural Economics (NSE), the third wave of development economics. Features three waves evolution table, NSE core framework with extended Cobb-Douglas production function Y = A·K^α·L^β·S^γ (where S is structural variable), factor endowments visualization with K/L ratio analysis, causal chain animation from factor endowments to economic growth, growth decomposition with TFP contribution analysis, six functions of a facilitative state (hard/soft infrastructure, industry discovery, cluster development, risk sharing, externalities), GIFF framework six-step methodology, country case studies (China, Vietnam, Ethiopia), and theory comparison with neoliberalism and structuralism. Multi-language support (zh, en, es, fr, de, ru, pt).
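
The growth decomposition implied by the production function follows from log-differentiation, g_Y = g_A + α·g_K + β·g_L + γ·g_S; the elasticities and data points below are illustrative, not from Lin's case studies:

```python
import math

# Extended Cobb-Douglas Y = A * K^alpha * L^beta * S^gamma, where S is the
# structural variable. Taking logs and differencing between two periods
# splits output growth into a TFP term g_A and factor contributions.
def output(A, K, L, S, alpha, beta, gamma):
    return A * K**alpha * L**beta * S**gamma

alpha, beta, gamma = 0.4, 0.5, 0.1                 # illustrative elasticities
Y0 = output(1.00, 100, 50, 10, alpha, beta, gamma)
Y1 = output(1.02, 105, 51, 11, alpha, beta, gamma)

g_Y = math.log(Y1 / Y0)                            # continuous growth rate
tfp = math.log(1.02 / 1.00)                        # g_A: the TFP contribution
factors = (alpha * math.log(105 / 100)
           + beta * math.log(51 / 50)
           + gamma * math.log(11 / 10))
print(abs(g_Y - (tfp + factors)) < 1e-9)           # True: decomposition exact
```

In log terms the decomposition is exact, which is what makes the TFP contribution in the demo's growth-decomposition view well defined as a residual.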