
add README.md #4

Open
wants to merge 32 commits into main

Conversation

tscholak (Collaborator) commented Oct 16, 2024

  • add README.md
  • add/update examples:
    • Slurm
    • Kubernetes
  • add docker build and push workflow

tscholak marked this pull request as ready for review on October 18, 2024, 13:45
.github/workflows/run-tests.yaml (outdated)

vocab_size: 32000
tie_word_embeddings: false
multi_stage:
  zero_stage: 3

Collaborator

Stage 2 should be enough

Collaborator Author

I have only benchmarked with stage 3 so far. I'd like to keep it that way, since this is also our internal benchmark.
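
For context, ZeRO stage 2 shards optimizer state and gradients across data-parallel ranks, while stage 3 additionally shards the parameters themselves; stage 2 typically costs more memory per GPU but less communication. A minimal sketch of the reviewer's suggestion, assuming the `multi_stage.zero_stage` field shown in the diff above is the only knob that would change:

```yaml
# Sketch only: same benchmark config as above, but with ZeRO stage 2
# (optimizer state and gradient sharding; parameters stay replicated).
multi_stage:
  zero_stage: 2
```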

README.md (outdated)
- 📝 Simple YAML configuration for hassle-free setup.
- 💻 Command-line interface for easy launches.
- 📊 Detailed logging and real-time monitoring features.
- 📚 Extensive documentation and practical tutorials.

Collaborator

Really?

Collaborator Author

Aspirationally, yes

3. 🎨 **Fast-LLM is Incredibly Flexible**:
- 🤖 Compatible with all common language model architectures in a unified class.
- ⚡ Efficient dropless Mixture-of-Experts (MoE) support.
- 🧩 Customizable for language model architectures, data loaders, loss functions, and optimizers.

Collaborator

Not yet...

Collaborator Author (@tscholak), Oct 18, 2024

The custom model template qualifies, doesn't it?

README.md (outdated)

- name: Install dependencies
  run: |
    pip install -e .
    pip install "torch>=2.2.2" "numpy>=1.24.4,<2.0.0" "safetensors>=0.4.4" "transformers>=4.44.2" "pytest>=8.3.2" "pytest-depends>=1.0.1"

Collaborator Author

@jlamypoirier, it would be better if there was a target for this in setup.cfg, wdyt?
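
One way that suggestion could look, as a sketch only: a hypothetical `testing` extra under `[options.extras_require]` in setup.cfg carrying the pins above, with the workflow step reduced to a single editable install. The `testing` name and the step layout are assumptions, not part of this PR:

```yaml
# Hypothetical workflow step, assuming setup.cfg gains a "testing" extra
# that carries the torch/transformers/pytest pins listed above.
- name: Install dependencies
  run: |
    pip install -e ".[testing]"
```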

tscholak changed the title from "add readme" to "add README.md" on Oct 19, 2024