LexiMind Architecture
Overview
LexiMind couples a from-scratch Transformer implementation with a modern data and inference stack. The project consists of three major layers:
- Data & Preprocessing – lightweight text cleaning built on top of scikit-learn primitives and a Hugging Face tokenizer wrapper with deterministic batching helpers.
- Model Composition – the bespoke encoder/decoder stack with task heads assembled via `MultiTaskModel`, plus `models.factory.build_multitask_model` to rebuild the network from configuration files.
- Inference & Serving – a multi-task pipeline for summarization, emotion, and topic classification, surfaced through a CLI and a FastAPI service with a Gradio UI.
Custom Transformer Stack
The custom Transformer is designed with modern architectural choices while maintaining compatibility with pre-trained weights from Google's FLAN-T5.
Architecture Highlights
- Pre-Layer Normalization (Pre-LN): RMSNorm applied before each sublayer for stable training
- RMSNorm: More efficient than LayerNorm (no mean computation, no bias parameters)
- FlashAttention: Via PyTorch 2.0's `F.scaled_dot_product_attention` for O(N) memory
- Learned Positional Embeddings: Trainable position representations (randomly initialized)
- Multi-Head Attention: 12 heads with optional LoRA adapters and RoPE support
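The sketch below shows how the highlights above combine in a single Pre-LN self-attention block; the class and attribute names are illustrative, not the project's actual ones:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RMSNorm(nn.Module):
    """RMS normalization: no mean subtraction and no bias parameter."""

    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return self.weight * x * rms


class PreLNSelfAttentionBlock(nn.Module):
    """Pre-LN block: normalize first, attend, then add the residual."""

    def __init__(self, d_model: int = 768, n_heads: int = 12):
        super().__init__()
        self.norm = RMSNorm(d_model)
        self.n_heads = n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model, bias=False)
        self.out = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        h = self.norm(x)  # Pre-LN: RMSNorm applied before the sublayer
        q, k, v = self.qkv(h).chunk(3, dim=-1)
        # Reshape to (batch, heads, seq, head_dim) for scaled_dot_product_attention.
        q, k, v = [z.view(b, t, self.n_heads, d // self.n_heads).transpose(1, 2)
                   for z in (q, k, v)]
        # PyTorch dispatches to FlashAttention / memory-efficient kernels when available.
        attn = F.scaled_dot_product_attention(q, k, v)
        attn = attn.transpose(1, 2).reshape(b, t, d)
        return x + self.out(attn)  # residual connection
```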
Weight Loading from FLAN-T5
The `factory.py` module loads weights from FLAN-T5-base, which uses a compatible Pre-LN architecture:
- Token embeddings: Shared between encoder and decoder
- Attention projections: Q, K, V, O weights (bias initialized to zero since T5 has no attention bias)
- FFN weights: `wi_1` → `linear1`, `wo` → `linear2` (T5 uses a gated FFN; we use the up/down projections)
- RMSNorm weights: Direct transfer (both use RMSNorm without bias)
- LM head: Loaded from T5's `lm_head`
Note: T5 uses relative position bias computed in attention, not absolute embeddings. Our learned positional embeddings are randomly initialized and train quickly during fine-tuning.
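For illustration, the FLAN-T5 state-dict keys involved in the mapping above look like this for one encoder block; everything on the custom side (the dict field names, the commented attribute path) is hypothetical:

```python
from transformers import T5ForConditionalGeneration

t5 = T5ForConditionalGeneration.from_pretrained("google/flan-t5-base")
sd = t5.state_dict()

# Shared token embeddings and the LM head transfer directly.
token_embedding = sd["shared.weight"]   # reused by both encoder and decoder
lm_head_weight = sd["lm_head.weight"]

# First encoder block. T5 attention projections carry no bias, so the custom
# model's biases stay zero-initialized.
block0 = {
    "q": sd["encoder.block.0.layer.0.SelfAttention.q.weight"],
    "k": sd["encoder.block.0.layer.0.SelfAttention.k.weight"],
    "v": sd["encoder.block.0.layer.0.SelfAttention.v.weight"],
    "o": sd["encoder.block.0.layer.0.SelfAttention.o.weight"],
    "attn_norm": sd["encoder.block.0.layer.0.layer_norm.weight"],   # RMSNorm weight
    # FLAN-T5's FFN is gated (wi_0 is the gate branch, wi_1 the linear up-projection);
    # only the up/down projections transfer.
    "linear1": sd["encoder.block.0.layer.1.DenseReluDense.wi_1.weight"],
    "linear2": sd["encoder.block.0.layer.1.DenseReluDense.wo.weight"],
    "ffn_norm": sd["encoder.block.0.layer.1.layer_norm.weight"],
}

# A custom block would then copy tensors in place, e.g. (hypothetical attribute names):
# custom_block.self_attn.q_proj.weight.data.copy_(block0["q"])
```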
File Structure
- `src/models/encoder.py` – TransformerEncoder with Pre-LN RMSNorm blocks
- `src/models/decoder.py` – TransformerDecoder with KV-cache for efficient generation
- `src/models/attention.py` – Multi-Head Attention with FlashAttention, LoRA, and RoPE support
- `src/models/heads.py` – ClassificationHead (mean pooling) and LMHead (with weight tying)
- `src/models/multitask.py` – Routes inputs to task-specific heads
- `src/models/factory.py` – Builds models and loads FLAN-T5 weights
Data, Tokenization, and Preprocessing
- `src/data/tokenization.py` wraps `AutoTokenizer` (configured for FLAN-T5) to provide tensor-aware batching and helper utilities for decoder input shifting.
- `src/data/preprocessing.py` introduces `TextPreprocessor`, layering a `BasicTextCleaner` with optional scikit-learn transformers.
- `src/data/dataset.py` and `src/data/dataloader.py` define strongly typed dataset containers and collators.
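As a rough sketch of the decoder-input shifting these helpers provide (T5-style models start decoding from the pad token; the `shift_right` helper here is illustrative, not the project's actual API):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")


def shift_right(labels: torch.Tensor, start_token_id: int) -> torch.Tensor:
    """Build decoder inputs by prepending the start token and dropping the last label."""
    decoder_inputs = labels.new_full(labels.shape, start_token_id)
    decoder_inputs[:, 1:] = labels[:, :-1]
    return decoder_inputs


inputs = tokenizer(["summarize: the quick brown fox jumps over the lazy dog"],
                   return_tensors="pt", padding=True)
targets = tokenizer(["a fox jumps"], return_tensors="pt", padding=True)
# T5-style models use the pad token (id 0) as the decoder start token.
decoder_input_ids = shift_right(targets["input_ids"], start_token_id=tokenizer.pad_token_id)
```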
T5 Tokenizer Differences
- Vocab size: 32,128 tokens (SentencePiece)
- Special tokens: pad=0, eos=1 (no explicit BOS; decoder starts with pad token)
- Subword tokenization: Unigram-based (vs BART's BPE)
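A quick check of these properties on the FLAN-T5 tokenizer:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/flan-t5-base")

assert tok.pad_token_id == 0      # pad doubles as the decoder start token
assert tok.eos_token_id == 1
assert tok.bos_token_id is None   # no explicit BOS token
print(tok.tokenize("LexiMind summarizes documents"))  # unigram SentencePiece pieces
```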
Training Pipeline
`src/training/trainer.py` coordinates multi-task optimization with:
- Mixed precision training (bfloat16 on Ampere/Ada GPUs)
- Gradient accumulation for larger effective batch sizes
- Per-task loss weighting and label smoothing
- `torch.compile`: JIT compilation with the Inductor backend for a 20-40% speedup
- Metrics in `src/training/metrics.py` include accuracy, multi-label F1, and ROUGE-like overlap
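A condensed sketch of how these pieces typically fit together in a training step; the model call signature, batch layout, and task weights are placeholders rather than the project's actual interfaces:

```python
import torch
import torch.nn.functional as F


def train_epoch(model, dataloader, task_weights, accum_steps: int = 4):
    """Illustrative multi-task loop: bf16 autocast, accumulation, per-task weighting."""
    model = torch.compile(model)  # Inductor backend JIT compilation
    optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

    for step, batch in enumerate(dataloader):
        # bfloat16 autocast on Ampere/Ada GPUs; unlike fp16, no GradScaler is needed.
        with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
            logits = model(**batch["inputs"], task=batch["task"])  # placeholder call signature
            loss = F.cross_entropy(
                logits.view(-1, logits.size(-1)),
                batch["labels"].view(-1),
                ignore_index=-100,
                label_smoothing=0.1,            # label smoothing
            )
            # Per-task loss weight, scaled down so gradients match the effective batch.
            loss = task_weights[batch["task"]] * loss / accum_steps

        loss.backward()
        if (step + 1) % accum_steps == 0:       # gradient accumulation boundary
            optimizer.step()
            optimizer.zero_grad(set_to_none=True)
```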
Inference & Serving
- `src/inference/pipeline.py` exposes summarization, emotion, and topic predictions with shared pre-processing, generation, and thresholding logic.
- `src/inference/factory.py` rebuilds the full pipeline using the exported tokenizer artifact.
- The CLI (`scripts/inference.py`) drives the pipeline from the command line.
- The Gradio demo (`scripts/demo_gradio.py`) provides a web interface.
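As an illustration of the serving layer, a minimal FastAPI wrapper around the pipeline might look like the following; the import path, factory signature, and method names are assumptions based on the layout above:

```python
from fastapi import FastAPI
from pydantic import BaseModel

# Hypothetical import mirroring the file layout; the real factory may differ.
from src.inference.factory import build_pipeline

app = FastAPI(title="LexiMind")
pipeline = build_pipeline(tokenizer_dir="artifacts/hf_tokenizer")  # assumed signature


class TextIn(BaseModel):
    text: str


@app.post("/summarize")
def summarize(payload: TextIn) -> dict:
    return {"summary": pipeline.summarize(payload.text)}  # assumed method name


@app.post("/classify")
def classify(payload: TextIn) -> dict:
    return {
        "emotion": pipeline.predict_emotion(payload.text),  # assumed method name
        "topic": pipeline.predict_topic(payload.text),      # assumed method name
    }
```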
Key Decisions
- Custom Transformer + Pre-trained Weights: Building from scratch demonstrates deep understanding while leveraging FLAN-T5's language knowledge
- Pre-LN RMSNorm: The normalization scheme used by T5 v1.1, LLaMA, and other recent models
- Tokenizer Artifact Preference: Inference favors `artifacts/hf_tokenizer` for reproducibility
- Sklearn-friendly Preprocessing: Optional `TransformerMixin` injection for custom cleaning
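For the last point, any cleaner that follows the scikit-learn transformer protocol can be plugged in; the `TextPreprocessor` wiring shown in the trailing comment is an assumption:

```python
import re

from sklearn.base import BaseEstimator, TransformerMixin


class LowercaseStripCleaner(BaseEstimator, TransformerMixin):
    """Example sklearn-compatible cleaner: lowercases and collapses whitespace."""

    def fit(self, X, y=None):
        return self  # stateless, nothing to learn

    def transform(self, X):
        return [re.sub(r"\s+", " ", text).strip().lower() for text in X]


# Hypothetical wiring into the project's preprocessor:
# preprocessor = TextPreprocessor(extra_transformers=[LowercaseStripCleaner()])
```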