
Outlines v0.1.0

@rlouf rlouf released this 07 Oct 14:01

⚡ Performance Improvements

  • Outlines Core: faster FSM index construction thanks to a new implementation (#1175).
  • 98% Reduction in Runtime Overhead: reduced overhead by storing FSM token masks as tensors (#1013).
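The tensor-mask idea above can be sketched in plain Python (an illustrative toy, not the library's actual implementation; the names `state_masks` and `apply_mask` are hypothetical): for each FSM state, precompute a boolean mask over the vocabulary once, then apply it to the logits each step instead of rebuilding an allowed-token set.

```python
import math

VOCAB_SIZE = 8

# Hypothetical precomputed masks: FSM state -> boolean mask over the vocabulary.
# In Outlines these are stored as tensors (#1013); plain lists stand in here.
state_masks = {
    0: [tok in (1, 2, 3) for tok in range(VOCAB_SIZE)],
    1: [tok in (4, 7) for tok in range(VOCAB_SIZE)],
}

def apply_mask(logits, state):
    """Set logits of tokens the FSM disallows in `state` to -inf."""
    mask = state_masks[state]
    return [l if allowed else -math.inf for l, allowed in zip(logits, mask)]

logits = [0.5] * VOCAB_SIZE
masked = apply_mask(logits, 0)
print([i for i, l in enumerate(masked) if l != -math.inf])  # → [1, 2, 3]
```

Because the mask is a fixed-size array, the per-step cost is a single elementwise operation over the vocabulary, which vectorizes well once the lists become real tensors.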

💡 Enhancements

  • Unified Logits Processors: all models now use the shared outlines.processors, with integrations added for llama-cpp, vLLM, and ExLlamaV2.
  • Custom Regex Parsers: Simplify the implementation of custom Guide classes with Regex Parser support (#1039).
  • Qwen-style Byte Tokenizer Support: Now compatible with Qwen-style byte tokenizers (#1153).
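A custom Guide class essentially walks an automaton: from the current state it reports which tokens may come next, and it advances the state once a token is chosen. The sketch below is a simplified toy, assuming a hand-built DFA for the pattern `ab+`; the class and method names loosely mirror the Guide interface but are not the library's real signatures.

```python
# Toy DFA for the regex "ab+": state -> {token: next_state}.
# A real Guide operates over token IDs and FSM indexes built by Outlines Core.
TRANSITIONS = {
    0: {"a": 1},
    1: {"b": 2},
    2: {"b": 2},
}
FINAL_STATES = {2}

class ToyRegexGuide:
    """Simplified stand-in for a custom Guide class (illustrative only)."""

    def get_next_instruction(self, state):
        # Tokens the automaton allows from this state.
        return sorted(TRANSITIONS.get(state, {}))

    def get_next_state(self, state, token):
        return TRANSITIONS[state][token]

    def is_final_state(self, state):
        return state in FINAL_STATES

guide = ToyRegexGuide()
state = 0
for token in "abb":
    assert token in guide.get_next_instruction(state)
    state = guide.get_next_state(state, token)
print(guide.is_final_state(state))  # → True: "abb" matches ab+
```

During generation, the allowed-token list from `get_next_instruction` is what gets turned into the logits mask described under Performance Improvements.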

🐛 Bug Fixes

  • CFG Beta: fixed a large number of bugs, enabling beta-quality grammar-based generation using Lark (#1067).
  • Fixed incorrect argument order breaking some models in models.transformers_vision (#1077).
  • Resolved OpenAI fallback tokenizer issue (#1046).
  • Added an option to disable tqdm progress bars during inference with vLLM (#1004).
  • models.llamacpp no longer sets an implicit max_tokens (#996).
  • Fixed whitespace handling for models.mlxlm (#1003).
  • models.mamba now works and supports structured generation (#1040).
  • Resolved pad_token_id reset issue in TransformerTokenizer (#1068).
  • Fixed outlines.generate generator reuse causing runtime errors (#1160).

⚠️ Breaking Changes

  • The outlines.integrations module is now deprecated (#1061).


Full Changelog: 0.0.46...0.1.0