
JDWebProgrammer/arc-agi-augmented

Dataset Description

Overview

This dataset is an augmented version of grids extracted from the ARC-AGI dataset (Abstraction and Reasoning Corpus). It focuses on individual grids rather than full tasks or games, providing an expanded collection for pretraining and testing models like autoencoders (AEs) or latent-space reasoners.

  • Source: Derived from the training split of ARC-AGI (all demonstration and test grids).
  • Augmentations: Each original grid is expanded with 5 transformations (horizontal flip, vertical flip, 90°/180°/270° rotations), resulting in 6 variants per grid (original + 5 augments).
  • Key Note: This is not the full games/tasks from ARC-AGI. It contains only the raw, augmented grids (as 2D lists of integers 0-10) for standalone use in perceptual pretraining or reconstruction testing. Use the original ARC-AGI for full few-shot reasoning tasks.
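The six variants per grid can be produced deterministically with NumPy; `augment_grid` below is a hypothetical helper (not part of the dataset's published code) illustrating the transformations named above:

```python
import numpy as np

def augment_grid(grid):
    """Return the original grid plus its 5 deterministic augmentations."""
    g = np.array(grid)
    variants = [
        g,                # original
        np.fliplr(g),     # horizontal flip
        np.flipud(g),     # vertical flip
        np.rot90(g, k=1), # 90° rotation
        np.rot90(g, k=2), # 180° rotation
        np.rot90(g, k=3), # 270° rotation
    ]
    return [v.tolist() for v in variants]

variants = augment_grid([[0, 1], [2, 3]])
print(len(variants))  # 6
```

Note that for symmetric grids some variants coincide, so the 6× expansion can contain duplicates.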

Dataset Structure

  • Format: Hugging Face Dataset object.
  • Splits: Single split (train) with one field:
    • augmented_grids: List of 2D lists (grids). Each grid is [[int, ...], ...] (H x W, values 0-10).
  • Size: ~48,000 grids in total (all original grids from the ~400 ARC training tasks, each expanded into 6 variants; see metadata.json for exact counts).
  • Metadata: See metadata.json for stats (original grids, augmentation factor).

Example grid entry:

augmented_grids[0] = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # Example 3x3 grid

Usage

Load and use for pretraining:

from datasets import load_dataset
ds = load_dataset("JDWebProgrammer/arc-agi-augmented")
grids = ds["train"]["augmented_grids"]  # list of all grids in the train split
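The grids vary in height and width, so batched pretraining typically requires padding them to a common shape first. A minimal sketch, assuming a 30×30 upper bound (the maximum grid size in ARC) and zero-padding; `pad_grid` is an illustrative helper, not part of the dataset:

```python
import numpy as np

def pad_grid(grid, size=30, pad_value=0):
    """Zero-pad a variable-size 2D grid to a fixed (size x size) array."""
    g = np.array(grid)
    out = np.full((size, size), pad_value, dtype=np.int64)
    out[: g.shape[0], : g.shape[1]] = g
    return out

# Stack two grids of different shapes into one batch.
batch = np.stack([pad_grid(g) for g in ([[1, 2]], [[3], [4]])])
print(batch.shape)  # (2, 30, 30)
```

Padding with 0 conflates the pad region with ARC's background color 0; a model-specific sentinel value or a mask channel may be preferable.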

Ideal for:

  • Pretraining perceptual models.
  • Testing reconstruction accuracy (compare original vs. augmented).
  • Data augmentation for fluid intelligence tasks (e.g., ARC-like pattern inference).

Generation

  • Extracted all input/output grids from ARC-AGI training split demos/tests.
  • Applied deterministic augmentations (flips/rotations) to expand variety without labels.
  • No synthetic generation — pure augmentation of real ARC data.

Limitations

  • Grids only (no task structure/context) — not for end-to-end ARC solving.
  • Augmentations preserve structure but may introduce artifacts (e.g., rotations on asymmetric grids).
  • Values 0-10 (ARC standard); normalize for models.
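For the normalization mentioned above, one common choice is one-hot encoding each cell; the sketch below uses 11 classes to match the 0-10 value range stated by this card (`one_hot_grid` is an illustrative helper, not provided by the dataset):

```python
import numpy as np

def one_hot_grid(grid, num_values=11):
    """One-hot encode a grid of integer cell values in [0, num_values)."""
    g = np.array(grid)
    return np.eye(num_values, dtype=np.float32)[g]  # shape (H, W, num_values)

encoded = one_hot_grid([[0, 1], [2, 3]])
print(encoded.shape)  # (2, 2, 11)
```

One-hot channels avoid imposing a spurious ordering on the color values, which simple scaling (e.g. dividing by 10) would introduce.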

License

  • Based on ARC-AGI (CC BY-SA 4.0) — inherits same license.
  • Augmentations: MIT (free for research/commercial).

Citation

@misc{dataartist2025arcagi,
  title  = {ARC-AGI},
  author = {dataartist},
  year   = {2025},
  url    = {https://huggingface.co/datasets/dataartist/arc-agi}
}

Generated for pretraining perceptual models on ARC-style puzzles. Not a substitute for full ARC tasks.
