IndexedDB with Cloudflare Workers for using git in Workers

I was wondering if we could run git in Workers.

The only thing this library requires is IndexedDB. Is there any possibility of supporting this somehow? I tried:

const indexedDB = require("idb")

for https://www.npmjs.com/package/idb

but it did not work as expected, and I get the following error from isomorphic-git:

indexedDB is not defined

While IDBFactory (indexedDB) is part of V8, it is removed by Cloudflare; that is why it’s “undefined”.
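You can confirm the missing global with a quick runtime check before isomorphic-git’s storage layer throws. This is just a sketch; `hasIndexedDB` is a made-up helper name:

```javascript
// Sketch: detect whether the runtime actually exposes a usable IDBFactory.
// On Cloudflare Workers (and plain Node.js) this returns false.
function hasIndexedDB() {
  return typeof globalThis.indexedDB !== "undefined" &&
         typeof globalThis.indexedDB.open === "function";
}

console.log(hasIndexedDB() ? "IndexedDB available" : "no IndexedDB: fall back");
```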

Also, keep in mind that such a database would only be usable during the request, since Workers don’t persist for long.

I just want to use it during the request. Why was it decided to remove indexedDB, though?

Setting global.indexedDB = idb makes indexedDB available, but the methods are not the same, so my use case does not work with Cloudflare Workers. Still, it’s interesting to know that this is possible.
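The mismatch is easy to see: the `idb` package is a promise-based wrapper around the native API, not a polyfill, so it exposes `openDB` where consumers of the real `IDBFactory` expect `open`. A minimal illustration (the object below only mimics the shape of the module; it is not the real package):

```javascript
// Stand-in for the shape of require("idb"): a promise-based wrapper,
// not an implementation of IDBFactory.
const idbModule = {
  openDB: async (name, version) => ({ name, version }),
};

// What was tried above:
globalThis.indexedDB = idbModule;

// isomorphic-git's storage layer calls indexedDB.open(...), which doesn't exist here:
console.log(typeof indexedDB.open);   // "undefined"
console.log(typeof indexedDB.openDB); // "function"
```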

What’s your use-case for using it only for a single request? I’m curious.


I was trying to build an API service that uses git via https://github.com/isomorphic-git/isomorphic-git

This uses indexedDB. The flow is just cloning the repo, making modifications, and then pushing it. But now that I think about it, Cloudflare Workers will have memory limitations that won’t let it clone larger repositories anyway.

So I’m falling back to using the GitHub API.
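For a clone-edit-push flow on single files, GitHub’s contents API can replace git entirely: one GET to fetch the file’s current sha, then one PUT with the new base64 content. A hedged sketch (the helper name and all parameter values are placeholders of mine; the endpoint and body fields follow GitHub’s REST API):

```javascript
// Sketch: build the request for GitHub's "update file contents" endpoint
// (PUT /repos/{owner}/{repo}/contents/{path}). Pass the result to fetch()
// inside the Worker.
function buildUpdateRequest({ owner, repo, path, branch, message, content, sha, token }) {
  return {
    url: `https://api.github.com/repos/${owner}/${repo}/contents/${path}`,
    options: {
      method: "PUT",
      headers: {
        "Authorization": `token ${token}`,
        "User-Agent": "my-worker",      // GitHub's API requires a User-Agent
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        message,                        // commit message
        branch,
        content: btoa(content),         // the API expects base64 content
        sha,                            // sha of the existing file, required on update
      }),
    },
  };
}

const req = buildUpdateRequest({
  owner: "me", repo: "demo", path: "README.md", branch: "main",
  message: "update via API", content: "hello", sha: "abc123", token: "ghp_...",
});
console.log(req.url); // https://api.github.com/repos/me/demo/contents/README.md
```

In a real Worker you would first GET the same URL to read the file’s current sha, then call `fetch(req.url, req.options)`; `btoa` is available in both Workers and modern Node.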

Check out GitLab’s requirements relative to GitHub:

https://docs.gitlab.com/ee/install/requirements.html

Thanks for the link, but I am not looking to self-host git; I just meant using the git command.

It’s info regarding using a different database (unless you use Windows, which GitLab doesn’t support).

I want to chime in just because I’ve honestly thought about doing this too, but of course never had time to get around to it. And as the author of isomorphic-git, I’m always curious how it could be used in different environments.

@hrishikeshbman what were the memory limitations you ran into? I’m super curious.


Workers have a 128 MB memory limit; I don’t think they specifically ran into it, though.

And the 128 MB limit is not per-request, but applies over the lifetime of the Worker until it gets evicted. So even if you don’t use the full 128 MB during one request, subsequent requests might fill it.