# Data Migration Scripts
This directory contains scripts for migrating filament data from Confluence to DynamoDB.
## Prerequisites

- AWS credentials configured (either via the AWS CLI or environment variables)
- DynamoDB table created via Terraform
- Confluence API credentials (if migrating from Confluence)
## Setup

```bash
cd scripts
npm install
```
## Configuration

Create a `.env.local` file in the project root with:

```bash
# AWS Configuration
AWS_REGION=eu-central-1
DYNAMODB_TABLE_NAME=filamenteka-filaments

# Confluence Configuration (optional)
CONFLUENCE_API_URL=https://your-domain.atlassian.net
CONFLUENCE_TOKEN=your-email:your-api-token
CONFLUENCE_PAGE_ID=your-page-id
```
## Usage

### Migrate from local data (`data.json`)

```bash
npm run migrate
```

### Clear existing data and migrate

```bash
npm run migrate:clear
```

### Manual execution

```bash
# Migrate without clearing
node migrate-with-parser.js

# Clear existing data first
node migrate-with-parser.js --clear
```
## What the script does

1. **Checks for Confluence credentials**
   - If found: fetches data from the Confluence page
   - If not found: uses the local `public/data.json` file
2. **Parses the data**
   - Extracts filament information from the HTML table (Confluence)
   - Or reads the JSON directly (local file)
3. **Prepares data for DynamoDB**
   - Generates a unique ID for each filament
   - Adds timestamps (`createdAt`, `updatedAt`)
4. **Writes to DynamoDB**
   - Writes in batches of 25 items (the DynamoDB limit)
   - Shows progress during migration
5. **Verifies the migration**
   - Counts the total items in DynamoDB
   - Shows a sample item for verification
## Troubleshooting

- **Table not found**: Make sure you've run `terraform apply` first
- **Access denied**: Check your AWS credentials and permissions
- **Confluence errors**: Verify your API token and page ID
- **Empty migration**: Check that the Confluence page has a table with the expected format