NodeBB Assets - Object Storage
-
Sorry I haven't had a chance yet. It's on my to-do list but I haven't had much time to dig into something like this recently.
What CDN service were you using? I'd like to try it with the same one.
-
@phenomlab sorry I haven't had a chance yet. Been busy with travel lately but I'll try to make some time this month.
-
You can achieve this relatively easily with Cloudflare R2, or with AWS CloudFront backed by S3 storage. It would just require modifying the NodeBB build process to copy static assets to the appropriate bucket(s). Routing requests for static assets to the various buckets would be delegated to Cloudflare Workers or CloudFront Lambda@Edge functions. In both cases, the workers are location-aware and can route to the nearest replicated bucket. It should also be essentially free unless you're hosting a huge set of files/media: R2 provides 10 GB/month of storage for free with no egress charges. As an extra benefit, you would no longer need nginx in the pipeline, as only API requests would be inbound to NodeBB.
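For completeness: instead of exposing the bucket through a custom domain, R2 can also be attached to a worker directly as a bucket binding, which keeps asset serving entirely inside Cloudflare's network. A minimal sketch of that variant, assuming a binding named ASSETS_BUCKET and objects keyed under assets/ (both the binding name and the key layout are assumptions, not part of any NodeBB configuration):

// Sketch only: a module-syntax Worker serving objects from an R2 binding.
export default {
  async fetch(request, env) {
    const { pathname } = new URL(request.url);
    if (!pathname.startsWith('/assets/')) {
      // Non-asset traffic passes through to the NodeBB origin.
      return fetch(request);
    }
    // Object keys are assumed to be stored without a leading slash.
    const object = await env.ASSETS_BUCKET.get(pathname.slice(1));
    if (object === null) {
      return new Response('Not found', { status: 404 });
    }
    const headers = new Headers();
    object.writeHttpMetadata(headers); // propagates the stored Content-Type etc.
    headers.set('etag', object.httpEtag);
    return new Response(object.body, { headers });
  },
};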
-
@razibal to me this sounds quite invasive in terms of modifying the build process. I think it makes more sense for this to be handled as a plugin, given that Cloudflare, for instance, isn't a CDN in the traditional sense.
There are a number of CDN providers that are an order of magnitude cheaper than the lowest paid Cloudflare plan per month. Even with R2, which is technically freemium, there are going to be various limits and restrictions.
-
I guess it depends on your objectives. Using a NodeBB plugin is, from my perspective, less than optimal because it requires a round trip to the origin; I assumed the primary objective of the exercise is to serve assets from the edge to ensure minimum latency. I've been using Cloudflare for some use cases for quite a while, and their caching is quite robust (and cost-effective) if you leverage their Cache Reserve technology. As for the build process, there is no need to modify the core build process, just a simple post-build step that copies all static assets to the R2/S3 bucket. Then it's just a simple worker function to route all asset requests to R2 (after verifying that the asset is not in cache):
async function handleRequest(request) {
    const url = new URL(request.url);
    const { pathname, search } = url;
    if (pathname.includes('/assets/')) {
        ...
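Fleshed out, that route-then-check-cache logic might look something like the following sketch, using the Workers Cache API (the hostnames are placeholders and error handling is elided; this illustrates the approach rather than the exact worker used):

// Sketch only: serve /assets/* from the bucket, caching copies at the edge.
const host = 'nodebb.yourdomain.com';
const bucket = 'r2-static.yourdomain.com';

async function handleRequest(event) {
    const request = event.request;
    const { pathname } = new URL(request.url);
    if (!pathname.includes('/assets/')) {
        return fetch(request); // pass everything else through to NodeBB
    }
    const cache = caches.default;
    let response = await cache.match(request);
    if (!response) {
        // Cache miss: fetch from the bucket and store a copy at the edge.
        response = await fetch(request.url.replace(host, bucket));
        event.waitUntil(cache.put(request, response.clone()));
    }
    return response;
}

addEventListener('fetch', (event) => {
    event.respondWith(handleRequest(event));
});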
-
Here's a simple implementation using Cloudflare R2 storage.
Create a new bucket and attach it to your domain as a custom domain, e.g.
r2-static.yourdomain.com
Create a worker with the following script:
const host = 'nodebb.yourdomain.com';
const bucket = 'r2-static.yourdomain.com';

async function handleRequest(request) {
    const url = new URL(request.url);
    const { pathname, search } = url;
    const bucketUrl = request.url.replace(host, bucket);
    const response = await fetch(bucketUrl);
    return response;
}

addEventListener('fetch', (event) => {
    event.respondWith(handleRequest(event.request));
});
Add a trigger to this worker that uses the route:
nodebb.yourdomain.com/assets/*
Modify the scripts section of your package.json in the nodebb folder:

"scripts": {
    "start": "node loader.js",
    "debug": "NODE_ENV=dev DEBUG=* node loader.js",
    "build": "./nodebb build",
    "postbuild": "node postbuild.js",
    ...
Create a postbuild.js file in the nodebb root folder (change 'Your Cloudflare Account ID' to your actual account ID):
const AWS = require('aws-sdk');
const mime = require('mime-types');
const { S3Client } = require('@aws-sdk/client-s3');
const S3SyncClient = require('s3-sync-client');
const { TransferMonitor } = require('s3-sync-client');

// R2 exposes an S3-compatible endpoint under your Cloudflare account ID
const ep = new AWS.Endpoint('[Your Cloudflare Account ID].r2.cloudflarestorage.com');
const client = new S3Client({ region: 'auto', endpoint: ep });
const { sync } = new S3SyncClient({ client: client });

const monitor = new TransferMonitor();
monitor.on('progress', (progress) => console.log(progress));
// abort the sync if it is still running after 5 minutes
setTimeout(() => monitor.abort(), 300000);

async function syncStaticFiles() {
    await sync('./build/public', 's3://nodebb-static/assets', {
        monitor,
        maxConcurrentTransfers: 1000,
        commandInput: {
            ACL: 'public-read',
            ContentType: (syncCommandInput) => mime.lookup(syncCommandInput.Key) || 'text/html',
        },
    });
    process.exit();
}

syncStaticFiles();
And that should do it. Every time you perform a build using yarn build, a NodeBB build will be executed and the static assets will be copied to the R2 bucket; the Cloudflare worker will ensure that they are served from the bucket. A typical NodeBB installation has assets of less than 50 MB, while the free tier of R2 includes 10 GB with no egress charges. The free tier for Cloudflare Workers includes 100,000 requests per day.
-
@razibal said in NodeBB Assets - Object Storage:
Create a new bucket and attach it to your domain as a custom domain, e.g. r2-static.yourdomain.com
This isn't entirely clear on CF. If I attempt to connect a custom domain, it fails and tells me
DNS record for this domain already exists on zone. (Code: 10056)
-
@phenomlab you are probably using the root domain; you need to specify a subdomain that will get mapped as a DNS record. For example,
r2-static.yourdomain.com
instead of yourdomain.com
-
@razibal trying this on my dev install. Worker looks like this
const host = 'sudonix.dev';
const bucket = 'r2-static.sudonix.dev';

async function handleRequest(request) {
    const url = new URL(request.url);
    const { pathname, search } = url;
    const url = request.url.replace(host, bucket);
    const response = await fetch(url);
    return response;
}

addEventListener('fetch', (event) => {
    event.respondWith(handleRequest(event.request));
});
When attempting to save, I get
Uncaught SyntaxError: Identifier 'url' has already been declared at worker.js:6:14 (Code: 10021)
-
@phenomlab sorry, when I was editing the script to remove my domain, I added the url definition twice. Just change the second const url = to const bucketUrl = and the await fetch(url) to await fetch(bucketUrl). I'll edit my post.
-
@razibal Looks like the npm packages are missing - I had to install them. Now I get:

sudonix.dev@vps:~/nodebb$ node postbuild.js
(node:2257496) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023.
Please migrate your code to use AWS SDK for JavaScript (v3).
For more information, check the migration guide at https://a.co/7PzMCcy
(Use `node --trace-warnings ...` to show where the warning was created)
/home/sudonix.dev/nodebb/node_modules/@aws-sdk/credential-provider-node/dist-cjs/defaultProvider.js:13
    throw new property_provider_1.CredentialsProviderError("Could not load credentials from any providers", false);
    ^

CredentialsProviderError: Could not load credentials from any providers
    at /home/sudonix.dev/nodebb/node_modules/@aws-sdk/credential-provider-node/dist-cjs/defaultProvider.js:13:11
    at /home/sudonix.dev/nodebb/node_modules/@aws-sdk/property-provider/dist-cjs/chain.js:11:28
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async coalesceProvider (/home/sudonix.dev/nodebb/node_modules/@aws-sdk/property-provider/dist-cjs/memoize.js:14:24)
    at async SignatureV4.credentialProvider (/home/sudonix.dev/nodebb/node_modules/@aws-sdk/property-provider/dist-cjs/memoize.js:33:24)
    at async SignatureV4.signRequest (/home/sudonix.dev/nodebb/node_modules/@aws-sdk/signature-v4/dist-cjs/SignatureV4.js:87:29)
    at async /home/sudonix.dev/nodebb/node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:16:18
    at async /home/sudonix.dev/nodebb/node_modules/@aws-sdk/middleware-retry/dist-cjs/retryMiddleware.js:27:46
    at async /home/sudonix.dev/nodebb/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:7:26
    at async S3SyncClient.listBucketObjects (/home/sudonix.dev/nodebb/node_modules/s3-sync-client/lib/commands/list-bucket-objects.js:12:20) {
  tryNextLink: false,
  '$metadata': { attempts: 1, totalRetryDelay: 0 }
}
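That CredentialsProviderError means the v3 S3Client fell through its default provider chain (environment variables, shared config, and so on) without finding any credentials. A minimal sketch of one fix is to pass the access key pair from an R2 API token to the client explicitly in postbuild.js; the variable names R2_ACCESS_KEY_ID and R2_SECRET_ACCESS_KEY here are assumptions, not names the SDK requires:

// Sketch only: supply R2 credentials explicitly instead of relying on the
// SDK's default provider chain. Populate these env vars (names are
// assumptions) with the key pair generated for an R2 API token.
const client = new S3Client({
    region: 'auto',
    endpoint: ep,
    credentials: {
        accessKeyId: process.env.R2_ACCESS_KEY_ID,
        secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
    },
});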