A few months ago I started a side project mostly to get hands-on experience with three things I hadn't used seriously before: Docker, SQLite, and Python CLI tools. The plan was to build something small for educational purposes. But the project turned into DataTracker — a local version control system for data files.
This article is about the architecture, specifically the part that surprised me most: once I started designing how to actually store versioned files, I kept arriving at the same solutions git already uses. Not because I copied them, but because they're probably the best answers to the problem.
The use case is simple. You have a CSV, a set of images, or any data file really. You run some processing, the file changes. Later you want to know what it looked like before, compare the two versions, etc. You want this without manually copying files into data_v1/, data_v2/, data_final/, data_final_REAL/.
Git solves this for source code. It does not solve it well for binary files or large datasets, and it was never designed to. So the question becomes: what does a minimal version of git look like if you build it specifically for data files?
The most important design decision, and the one that everything else follows from, is how you store files.
The naive approach is to copy the file into some storage directory and name it after the dataset and version: sales-data-v1.csv, sales-data-v2.csv, and so on. This works until two versions of a dataset happen to contain identical data — then you've stored the same bytes twice for no reason.
Git's answer to this is content-addressed storage: don't name files after what they are, name them after what they contain. Specifically, hash the file contents and use the hash as the filename.
DataTracker does exactly this. When you run dt add ./sales.csv, this happens:
```python
# file_utils.py
import hashlib
import os
import shutil

def hash_file(file_path: str) -> str:
    sha256_hash = hashlib.sha256()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            sha256_hash.update(chunk)
    return sha256_hash.hexdigest()

def copy_file_to_objects(tracker_path: str, data_path: str, file_hash: str) -> None:
    save_path = os.path.join(tracker_path, "objects", file_hash)
    shutil.copy2(data_path, save_path)
```
The file ends up stored as .data_tracker/objects/a3f8c2... — a 64-character hex string. The original filename is recorded separately in the database. The storage layer has no concept of names at all.
Git calls these "blob objects". The structure is identical:
```
git:          .git/objects/a3/f8c2...           ← raw file contents
DataTracker:  .data_tracker/objects/a3f8c2...   ← raw file contents
```
The immediate practical benefit is automatic deduplication. If you add two datasets with identical contents, or update a dataset without actually changing the file, the hash is the same, the object file already exists, and no second copy is written. This is handled by a single database check before the copy:
```python
# db_manager.py — INSERT OR IGNORE means a collision is silently a no-op
conn.execute("INSERT OR IGNORE INTO objects (hash, size) VALUES (?, ?)", (file_hash, size))
```
No other deduplication logic is needed.
One important caveat: duplicate versions are still allowed. You can add a new version with identical contents if you want to; DataTracker only warns you about it.
Storing files by hash solves the storage problem, but it creates a new one: how do you know which hash belongs to which dataset version? Git uses a combination of tree objects and refs (branches, tags) stored as small files in .git/. I used SQLite, which gives you the same thing with foreign keys and transactions.
The database schema has four tables:
```
datasets — one row per tracked dataset (id, name, message, created_at)
  │
  └── versions — one row per version (dataset_id → datasets, object_hash, version number, original_path)
        │
        └── files — one row per file in a version (version_id → versions, object_hash → objects, relative_path)
              │
objects — one row per unique file stored (hash, size) ←───────────┘
```
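The schema above can be reconstructed as DDL. The exact column names and types here are my guesses from the description, not DataTracker's actual schema:

```python
import sqlite3

# Reconstruction of the four-table schema; column names/types are assumptions.
SCHEMA = """
CREATE TABLE datasets (
    id         INTEGER PRIMARY KEY,
    name       TEXT NOT NULL UNIQUE,
    message    TEXT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE objects (
    hash TEXT PRIMARY KEY,         -- SHA-256 of the file contents
    size INTEGER NOT NULL
);
CREATE TABLE versions (
    id             INTEGER PRIMARY KEY,
    dataset_id     INTEGER NOT NULL REFERENCES datasets(id),
    object_hash    TEXT NOT NULL,  -- the directory-level "primary hash"
    version_number TEXT NOT NULL,
    original_path  TEXT NOT NULL
);
CREATE TABLE files (
    id            INTEGER PRIMARY KEY,
    version_id    INTEGER NOT NULL REFERENCES versions(id),
    object_hash   TEXT NOT NULL REFERENCES objects(hash),
    relative_path TEXT NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK enforcement off by default
conn.executescript(SCHEMA)
```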
The files table is the key one. It's what lets DataTracker reconstruct a full directory version: given a version_id, you get back every file's hash and its relative path within the original directory. That's enough to recreate the original structure anywhere.
datasets and versions are roughly equivalent to git's refs and commits. objects is the object store manifest — it tracks sizes and prevents orphaned files, but the actual content lives in the filesystem.
Why SQLite over a JSON file? Three reasons: foreign keys enforce the relationships between tables (enabled explicitly with `conn.execute("PRAGMA foreign_keys = ON")`, since SQLite leaves them off by default), transactions make every add and update atomic, and the version history stays queryable with plain SQL instead of custom traversal code.

Git's unit of storage is always a single file (a blob). Directory structure is captured separately as tree objects that reference blobs. DataTracker takes a slightly different approach because the use case is different.
When you add a directory, DataTracker stores each file individually in objects/ (same as git), but it also computes a single primary hash for the whole directory. This primary hash is used for one specific purpose: duplicate detection at the version level, so you can warn users if needed.
```python
# file_utils.py
import hashlib
import os

def hash_directory(dir_path: str) -> str:
    sha256_hash = hashlib.sha256()
    for root, dirs, files in os.walk(dir_path):
        dirs.sort()   # deterministic walk order
        files.sort()
        for filename in files:
            filepath = os.path.join(root, filename)
            rel_path = os.path.relpath(filepath, dir_path)
            sha256_hash.update(rel_path.encode('utf-8'))  # include structure
            with open(filepath, 'rb') as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    sha256_hash.update(chunk)
    return sha256_hash.hexdigest()
```
The directory hash covers both file contents and relative paths — so renaming a file in the directory produces a different hash even if the contents are unchanged.
So the two-hash approach is:

- `versions.object_hash` — the directory-level hash, used to warn about duplicate versions
- the per-file hashes in `objects` and `files`, used for actual storage and retrieval

Git doesn't need this distinction because it always works at the file level. DataTracker needs it because the primary user-facing unit is a dataset version, not a file.
Here's what happens end to end when you run dt add ./data/ --title "experiment-1":
1. Walk the directory, collect all file paths
2. For each file:
a. SHA-256 hash the contents
b. Copy to .data_tracker/objects/<hash> (INSERT OR IGNORE — dedup is free)
c. Record (version_id, hash, relative_path) in the files table
3. SHA-256 hash the entire directory → primary_hash
4. Check if primary_hash already exists in versions → warn if so
5. Insert a row into datasets (or reuse existing for dt update)
6. Insert a row into versions (dataset_id, primary_hash, version number, original_path)
7. conn.commit() — all of the above is one transaction
Step 7 is important. If anything fails between steps 1 and 6, the commit never happens and the database sees none of it. The object files that were already copied to disk are then cleaned up explicitly. This is the same reason git's staging area exists — operations on the object store and operations on the index need to be kept consistent.
Pointing out the gaps is more useful than pretending the tool is complete.
No delta storage. Git delta-compresses similar objects inside packfiles rather than keeping a full copy of every version, which is why a repository with hundreds of commits doesn't grow linearly with the number of commits. DataTracker stores a full copy of every file in every version. For small datasets this is fine. For large ones it becomes expensive quickly.
No garbage collection. Git has git gc. DataTracker cleans up orphaned objects when you remove a dataset or version, but there's no general-purpose GC pass. If something goes wrong mid-operation and leaves orphaned objects, they stay there until you notice the dt storage numbers look wrong.
Linear history only. Git has branching. DataTracker has a single version number per dataset, incrementing linearly. There's no concept of parallel versions or merging.
No remote. Everything is local. There's no push/pull, no sharing between machines.
Some of these are limitations by design — the tool is meant to be simple. Others (delta storage in particular) are things I'd like to add eventually, but they bring a lot of new logic and complexity that I didn't want to take on initially.
I started this project thinking Docker and SQL would be the only interesting parts. The Docker integration (the dt transform command, which runs a transformation inside a container and auto-versions the output) did turn out to be very interesting. But the part that surprised me the most was how much of git's core architecture I ended up reimplementing without even trying — and thinking about it turned out to be a great way to understand git itself better.
Content-addressed storage is a 30-year-old idea. It shows up in git, in IPFS, in container image layers, in package managers. The reason it keeps appearing is that it solves a hard problem — identity and deduplication — with almost no code. The hash is the identity. Two files that are identical are automatically the same object. You don't have to write that logic; the data model expresses it.
A lot of developers use git every day without thinking much about how it actually works under the hood. I think building something similar, even something much simpler, is one of the better ways to change that.
```shell
# Install
git clone https://github.com/martin-iflap/DataTracker.git
cd DataTracker
pip install -e .

# Track a dataset
dt init
dt add ./data.csv --title "sales" -m "Raw export"

# Update it
dt update ./data_cleaned.csv --name sales -m "Removed nulls"

# See what changed
dt compare 1.0 2.0 --name sales

# Go back to v1
dt export ./recovered --name sales -v 1.0
```
Output from dt compare:
```
Comparison between version 1.0 and version 2.0:
...
Modified files:
  ~ data.csv | Size: 48.20 KB → 45.10 KB = -3.10 KB
    Similarity: 94.30%
    Lines added: 0, Lines removed: 47
```
The full source is at github.com/martin-iflap/DataTracker. The project is still actively developed — the transform/presets system is not finished, there's no GC command, and no status command yet. But I plan to add them in the coming weeks.
If you've built something similar, hit the same problems, or have a strong opinion about delta storage — I'd genuinely like to hear about it in the comments.