Adjust Attention Mechanism and Dataset Handling #295

Open

wants to merge 3 commits into main

Conversation

OLMResearch

Summary

This pull request introduces several updates to the attention mechanism and dataset handling. The main changes include adjustments to the attention mask logic, enhancements in data preprocessing, and integration of JSONL dataset support.

Changes

Attention Mechanism

  • Attention Mask Adjustment: Updated the condition in attention.py that selects between causal and non-causal attention so it checks whether the query shape is 1 rather than relying only on the second dimension, improving the handling of causal and non-causal attention modes (a hedged sketch follows below).
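
For illustration only, a minimal sketch of what such a check could look like. The function name `build_attention_mask`, the PyTorch framing, and the assumed `(batch, num_heads, query_len, head_dim)` layout are assumptions for this example; the PR's actual attention.py code is not reproduced here.

```python
import torch

def build_attention_mask(queries: torch.Tensor, causal: bool = True):
    # Hypothetical helper: `queries` is assumed to have shape
    # (batch, num_heads, query_len, head_dim).
    query_len = queries.shape[-2]
    if not causal or query_len == 1:
        # A single-token query needs no causal mask: it may attend to
        # every key it is given.
        return None
    # Lower-triangular mask so position i only attends to positions <= i.
    return torch.tril(
        torch.ones(query_len, query_len, dtype=torch.bool, device=queries.device)
    )
```

The point of the adjustment is that the causal/non-causal decision keys off the query length being 1 rather than a hard-coded dimension check.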

Dataset Handling

  • Data Concatenation: Integrated concatenate_datasets from the datasets library to support combining multiple datasets efficiently.
  • JSONL Dataset Support: Introduced a new JSONLDataset class for datasets in JSONL format, with methods for loading, padding, and chunking data. The class handles variable-length input sequences and stays compatible with the existing training pipeline (see the sketch after this list).
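
A rough sketch of the loading, chunking, and padding described above, with concatenate_datasets shown for combining multiple files. The "text" field name, the placeholder file names, the constructor signature, and the use of torch.utils.data.Dataset are assumptions, not the PR's actual implementation.

```python
import json
import torch
from torch.utils.data import Dataset
from datasets import load_dataset, concatenate_datasets

# Combining several JSONL files with the `datasets` library (file names are placeholders).
part_a = load_dataset("json", data_files="part_a.jsonl", split="train")
part_b = load_dataset("json", data_files="part_b.jsonl", split="train")
combined = concatenate_datasets([part_a, part_b])

class JSONLDataset(Dataset):
    """Illustrative JSONL dataset: load records, tokenize, chunk, and pad."""

    def __init__(self, path, encode, max_length=2048, pad_id=0):
        # `encode` is any callable mapping a string to a list of token ids.
        self.samples = []
        with open(path) as f:
            for line in f:
                record = json.loads(line)          # one JSON object per line
                ids = encode(record["text"])       # "text" field is an assumption
                # Split long sequences into max_length chunks ...
                for start in range(0, len(ids), max_length):
                    chunk = ids[start:start + max_length]
                    # ... and pad the final chunk up to max_length.
                    chunk = chunk + [pad_id] * (max_length - len(chunk))
                    self.samples.append(chunk)

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return torch.tensor(self.samples[idx], dtype=torch.long)
```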

Data Loader

  • Custom DataLoader: Created a get_jsonl_dataloader function that builds data loaders for JSONL datasets, handling training and validation splits separately with proper batching and shuffling.
  • Tokenizer Handling: Added a tokenizer function wrapping EleutherAI's GPT-NeoX tokenization, ensuring consistent encoding and padding across the pipeline (both are illustrated after this list).
  • Logging Enhancements: Improved logging for better traceability and debugging, especially during the evaluation and training steps.
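
One possible shape for the loader factory and tokenizer wiring described above, reusing the JSONLDataset sketch from the previous section. The GPT-NeoX checkpoint name, the function signature, and the default hyperparameters are assumptions, not the PR's actual code.

```python
import logging
from torch.utils.data import DataLoader
from transformers import AutoTokenizer

logger = logging.getLogger(__name__)

def get_jsonl_dataloader(train_path, valid_path, batch_size=8, max_length=2048):
    # GPT-NeoX tokenization via Hugging Face; the exact checkpoint is assumed.
    tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token  # consistent padding token

    def encode(text):
        return tokenizer(text, truncation=True, max_length=max_length)["input_ids"]

    train_ds = JSONLDataset(train_path, encode, max_length=max_length,
                            pad_id=tokenizer.pad_token_id)
    valid_ds = JSONLDataset(valid_path, encode, max_length=max_length,
                            pad_id=tokenizer.pad_token_id)
    logger.info("train samples: %d, validation samples: %d",
                len(train_ds), len(valid_ds))

    # Shuffle only the training split; keep validation order deterministic.
    train_loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True)
    valid_loader = DataLoader(valid_ds, batch_size=batch_size, shuffle=False)
    return train_loader, valid_loader
```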
