Clean up unnecessary migration files and outdated documentation

This commit is contained in:
DaX
2025-07-21 12:10:45 +02:00
parent 9f01158241
commit 4020bb4ab8
9 changed files with 19 additions and 664 deletions

View File

@@ -1,63 +0,0 @@
# Deployment Guide for Filamenteka
## Important: API Update Required
The production API at api.filamenteka.rs needs to be updated with the latest code changes.
### Changes that need to be deployed:
1. **Database Schema Changes**:
- Column renamed from `vakum` to `spulna`
- Column `otvoreno` has been removed
- Data types changed from strings to integers for `refill` and `spulna`
- Added CHECK constraint: `kolicina = refill + spulna`
2. **API Server Changes**:
- Updated `/api/filaments` endpoints to use new column names
- Updated data type handling (integers instead of strings)
- Added proper quantity calculation
### Deployment Steps:
1. **Update the API server code**:
```bash
# On the production server
cd /path/to/api
git pull origin main
npm install
```
2. **Run database migrations**:
```bash
# Run the migration to rename columns
psql $DATABASE_URL < database/migrations/003_rename_vakum_to_spulna.sql
# Run the migration to fix data types
psql $DATABASE_URL < database/migrations/004_fix_inventory_data_types.sql
# Fix any data inconsistencies
psql $DATABASE_URL < database/migrations/fix_quantity_consistency.sql
```
3. **Restart the API server**:
```bash
# Restart the service
pm2 restart filamenteka-api
# or
systemctl restart filamenteka-api
```
### Temporary Frontend Compatibility
The frontend has been updated to handle both the old and the new API response format:
- Old format: `vakum`, `otvoreno` (strings)
- New format: `spulna` (integer), no `otvoreno` field
Once the API is updated, the compatibility layer can be removed.
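A sketch of what such a compatibility layer might look like (the helper name `normalizeFilament` is hypothetical, and it assumes the old `vakum` field held the spool count as a numeric string, as the rename to `spulna` implies):

```javascript
// Hypothetical normalizer: accepts either the old API shape
// (`vakum`, `otvoreno`, string numbers) or the new shape
// (`spulna`, integers) and always returns the new shape.
function normalizeFilament(raw) {
  const isOldFormat = 'vakum' in raw;
  const refill = typeof raw.refill === 'string'
    ? parseInt(raw.refill, 10) || 0
    : raw.refill || 0;
  const spulna = isOldFormat
    ? parseInt(raw.vakum, 10) || 0
    : raw.spulna || 0;
  const { vakum, otvoreno, ...rest } = raw; // drop removed/renamed columns
  // kolicina mirrors the new CHECK constraint: kolicina = refill + spulna
  return { ...rest, refill, spulna, kolicina: refill + spulna };
}
```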
### Verification
After deployment, verify:
1. API returns `spulna` instead of `vakum`
2. Values are integers, not strings
3. Quantity calculations are correct (`kolicina = refill + spulna`)
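The three checks above can be scripted; `verifyFilament` below is a hypothetical helper (not part of the repo) that returns the list of problems found in one `/api/filaments` record:

```javascript
// Hypothetical post-deployment check for one /api/filaments record.
// Returns an array of problem descriptions; empty means the record passes.
function verifyFilament(f) {
  const problems = [];
  if ('vakum' in f) problems.push('still returns old `vakum` field');
  if (!('spulna' in f)) problems.push('missing `spulna` field');
  if (typeof f.refill !== 'number' || typeof f.spulna !== 'number') {
    problems.push('refill/spulna are not integers');
  }
  if (f.kolicina !== f.refill + f.spulna) {
    problems.push('kolicina != refill + spulna');
  }
  return problems;
}
```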

View File

@@ -1,101 +0,0 @@
# Filamenteka Deployment Guide
## Architecture
Filamenteka now uses:
- **Frontend**: Next.js deployed on AWS Amplify
- **Database**: PostgreSQL on AWS RDS (publicly accessible)
- **API**: Node.js server that can be run locally or deployed anywhere
## AWS RDS Setup
1. Navigate to the terraform directory:
```bash
cd terraform
```
2. Initialize Terraform:
```bash
terraform init
```
3. Apply the infrastructure:
```bash
terraform apply
```
4. After deployment, get the database connection details:
```bash
terraform output -json
```
5. Get the database password from AWS Secrets Manager:
```bash
aws secretsmanager get-secret-value --secret-id filamenteka-db-credentials --query SecretString --output text | jq -r .password
```
## Running the API
### Option 1: Local Development
1. Create `.env` file in the `api` directory:
```
DATABASE_URL=postgresql://filamenteka_admin:[PASSWORD]@[RDS_ENDPOINT]/filamenteka
JWT_SECRET=your-secret-key-here
ADMIN_PASSWORD=your-password-here
PORT=4000
```
2. Install dependencies and run migrations:
```bash
cd api
npm install
npm run migrate
npm run dev
```
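To avoid half-configured deployments, the server can fail fast when required variables are absent. The following is a sketch only; the helper name and exact variable set are assumptions based on the `.env` example above:

```javascript
// Hypothetical fail-fast config loader for the API.
function loadConfig(env = process.env) {
  const required = ['DATABASE_URL', 'JWT_SECRET', 'ADMIN_PASSWORD'];
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
  return {
    databaseUrl: env.DATABASE_URL,
    jwtSecret: env.JWT_SECRET,
    adminPassword: env.ADMIN_PASSWORD,
    port: parseInt(env.PORT, 10) || 4000, // PORT is optional, defaults to 4000
  };
}
```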
### Option 2: Deploy on a VPS/Cloud Service
You can deploy the Node.js API on:
- Heroku
- Railway
- Render
- AWS EC2
- DigitalOcean
- Any VPS with Node.js
Just ensure the `DATABASE_URL` points to your RDS instance.
## Frontend Configuration
Update `.env.local` to point to your API:
```
NEXT_PUBLIC_API_URL=http://localhost:4000/api # For local
# or
NEXT_PUBLIC_API_URL=https://your-api-domain.com/api # For production
```
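On the frontend side, the API client can resolve this variable with a local fallback; a minimal sketch (the function name is illustrative, not necessarily what `src/services/api.ts` does):

```javascript
// Hypothetical base-URL resolution for the frontend API client.
// Falls back to the local API when NEXT_PUBLIC_API_URL is unset.
function resolveApiUrl(env = process.env) {
  const url = env.NEXT_PUBLIC_API_URL || 'http://localhost:4000/api';
  return url.replace(/\/+$/, ''); // normalize: strip trailing slashes
}
```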
## Security Notes
1. **RDS Security**: The current configuration allows access from anywhere (0.0.0.0/0). In production:
- Update the security group to only allow your IP addresses
- Or use a VPN/bastion host
- Or deploy the API in the same VPC and restrict access
2. **API Security**:
- Change the default admin password
- Use strong JWT secrets
- Enable HTTPS in production
## Database Management
Connect to the PostgreSQL database using any client:
```
psql postgresql://filamenteka_admin:[PASSWORD]@[RDS_ENDPOINT]/filamenteka
```
Or use a GUI tool like:
- pgAdmin
- TablePlus
- DBeaver
- DataGrip

View File

@@ -1,126 +0,0 @@
# Project Structure
## Overview
Filamenteka is organized with clear separation between frontend, API, and infrastructure code.
## Directory Structure
```
filamenteka/
├── app/ # Next.js app directory
│ ├── page.tsx # Main page
│ ├── layout.tsx # Root layout
│ └── upadaj/ # Admin pages
│ ├── page.tsx # Admin login
│ ├── dashboard/ # Filament management
│ └── colors/ # Color management
├── src/ # Source code
│ ├── components/ # React components
│ │ ├── FilamentTableV2.tsx
│ │ ├── EnhancedFilters.tsx
│ │ ├── ColorSwatch.tsx
│ │ ├── InventoryBadge.tsx
│ │ └── MaterialBadge.tsx
│ ├── types/ # TypeScript types
│ │ ├── filament.ts
│ │ └── filament.v2.ts
│ ├── services/ # API services
│ │ └── api.ts
│ └── styles/ # Component styles
│ ├── index.css
│ └── select.css
├── api/ # Node.js Express API
│ ├── server.js # Express server
│ ├── migrate.js # Database migration script
│ ├── package.json # API dependencies
│ └── Dockerfile # Docker configuration
├── database/ # Database schemas
│ └── schema.sql # PostgreSQL schema
├── terraform/ # Infrastructure as Code
│ ├── main.tf # Main configuration
│ ├── vpc.tf # VPC and networking
│ ├── rds.tf # PostgreSQL RDS
│ ├── ec2-api.tf # EC2 for API server
│ ├── alb.tf # Application Load Balancer
│ ├── ecr.tf # Docker registry
│ ├── cloudflare-api.tf # Cloudflare DNS
│ └── variables.tf # Variable definitions
├── scripts/ # Utility scripts
│ ├── security/ # Security checks
│ │ └── security-check.js
│ └── pre-commit.sh # Git pre-commit hook
├── config/ # Configuration files
│ └── environments.js # Environment configuration
└── public/ # Static assets
```
## Environment Files
- `.env.development` - Development environment variables
- `.env.production` - Production environment variables
- `.env.local` - Local overrides (not committed)
## Key Concepts
### Architecture
- **Frontend**: Next.js static site hosted on AWS Amplify
- **API**: Node.js Express server running on EC2
- **Database**: PostgreSQL on AWS RDS
- **HTTPS**: Application Load Balancer with ACM certificate
### Data Flow
1. Frontend (Next.js) → HTTPS API (ALB) → Express Server (EC2) → PostgreSQL (RDS)
2. Authentication via JWT tokens
3. Real-time database synchronization
### Infrastructure
- Managed via Terraform
- AWS services: RDS, EC2, ALB, VPC, ECR, Amplify
- Cloudflare for DNS management
- Docker for API containerization
## Development Workflow
1. **Local Development**
```bash
npm run dev
```
2. **Deploy Infrastructure**
```bash
cd terraform
terraform apply
```
3. **Deploy API Updates**
- API automatically pulls latest Docker image every 5 minutes
- Or manually: SSH to EC2 and run deployment script
## Database Management
### Run Migrations
```bash
cd api
npm run migrate
```
### Connect to Database
```bash
psql postgresql://user:pass@rds-endpoint/filamenteka
```
## Security
- No hardcoded credentials
- JWT authentication for admin
- Environment-specific configurations
- Pre-commit security checks
- HTTPS everywhere
- VPC isolation for backend services

Binary file not shown.

View File

@@ -1,45 +0,0 @@
-- Migration: Add new TPU colors
-- Adds Frozen to TPU 90A and Neon Orange to TPU 85A
-- First, add the new colors to the colors table
INSERT INTO colors (name, hex) VALUES
('Frozen', '#B2E2F5'),
('Neon Orange', '#F68A1B')
ON CONFLICT (name)
DO UPDATE SET hex = EXCLUDED.hex;
-- Add TPU 90A filament for Frozen color
INSERT INTO filaments (tip, finish, boja, boja_hex, refill, spulna, kolicina, cena)
SELECT
'TPU' as tip,
'90A' as finish,
'Frozen' as boja,
'#B2E2F5' as boja_hex,
0 as refill,
0 as spulna,
0 as kolicina,
'0' as cena
WHERE NOT EXISTS (
SELECT 1 FROM filaments
WHERE tip = 'TPU'
AND finish = '90A'
AND boja = 'Frozen'
);
-- Add TPU 85A filament for Neon Orange color
INSERT INTO filaments (tip, finish, boja, boja_hex, refill, spulna, kolicina, cena)
SELECT
'TPU' as tip,
'85A' as finish,
'Neon Orange' as boja,
'#F68A1B' as boja_hex,
0 as refill,
0 as spulna,
0 as kolicina,
'0' as cena
WHERE NOT EXISTS (
SELECT 1 FROM filaments
WHERE tip = 'TPU'
AND finish = '85A'
AND boja = 'Neon Orange'
);

View File

@@ -1,255 +0,0 @@
# Improved Data Structure Proposal
## Current Issues
1. Mixed languages (English/Serbian)
2. String fields for numeric/boolean values
3. Inconsistent status representation
4. No proper inventory tracking
5. Missing important metadata
## Proposed Structure
```typescript
interface Filament {
// Identifiers
id: string;
sku?: string; // For internal tracking
// Product Info
brand: string;
type: 'PLA' | 'PETG' | 'ABS' | 'TPU' | 'SILK' | 'CF' | 'WOOD';
material: {
base: 'PLA' | 'PETG' | 'ABS' | 'TPU';
modifier?: 'Silk' | 'Matte' | 'Glow' | 'Wood' | 'CF';
};
color: {
name: string;
hex?: string; // Color code for UI display
pantone?: string; // For color matching
};
// Physical Properties
weight: {
value: number; // 1000 for 1kg, 500 for 0.5kg
unit: 'g' | 'kg';
};
diameter: number; // 1.75 or 2.85
// Inventory Status
inventory: {
total: number; // Total spools
available: number; // Available for use
inUse: number; // Currently being used
locations: {
vacuum: number; // In vacuum storage
opened: number; // Opened but usable
printer: number; // Loaded in printer
};
};
// Purchase Info
pricing: {
purchasePrice?: number;
currency: 'RSD' | 'EUR' | 'USD';
supplier?: string;
purchaseDate?: string;
};
// Condition
condition: {
isRefill: boolean;
openedDate?: string;
expiryDate?: string;
storageCondition: 'vacuum' | 'sealed' | 'opened' | 'desiccant';
humidity?: number; // Last measured
};
// Metadata
tags: string[]; // ['premium', 'engineering', 'easy-print']
notes?: string; // Special handling instructions
images?: string[]; // S3 URLs for photos
// Timestamps
createdAt: string;
updatedAt: string;
lastUsed?: string;
}
```
## Benefits
### 1. **Better Filtering**
```typescript
// Find all sealed PLA under 1kg
filaments.filter(f =>
f.material.base === 'PLA' &&
f.weight.value <= 1000 &&
f.condition.storageCondition === 'vacuum'
)
```
### 2. **Inventory Management**
```typescript
// Get total available filament weight
const totalWeight = filaments.reduce((sum, f) =>
sum + (f.inventory.available * f.weight.value), 0
);
// Find low stock items
const lowStock = filaments.filter(f =>
f.inventory.available <= 1 && f.inventory.total > 0
);
```
### 3. **Color Management**
```typescript
// Group by color for visualization
const colorGroups = filaments.reduce((groups, f) => {
const color = f.color.name;
groups[color] = groups[color] || [];
groups[color].push(f);
return groups;
}, {});
```
### 4. **Usage Tracking**
```typescript
// Find most recently used filaments
// (.getTime() is needed: subtracting Date objects is a type error in TS)
const recentlyUsed = filaments
  .filter(f => f.lastUsed)
  .sort((a, b) => new Date(b.lastUsed!).getTime() - new Date(a.lastUsed!).getTime())
  .slice(0, 10);
```
## Migration Strategy
### Phase 1: Add New Fields (Non-breaking)
```javascript
// Update Lambda to handle both old and new structure
const migrateFilament = (old) => ({
...old,
material: {
base: old.tip || 'PLA',
modifier: old.finish !== 'Basic' ? old.finish : undefined
},
color: {
name: old.boja
},
weight: {
value: 1000, // Default 1kg
unit: 'g'
},
inventory: {
total: parseInt(old.kolicina) || 1,
available: old.otvoreno ? 0 : 1,
inUse: 0,
locations: {
vacuum: old.vakum ? 1 : 0,
opened: old.otvoreno ? 1 : 0,
printer: 0
}
},
condition: {
isRefill: old.refill === 'Da',
storageCondition: old.vakum ? 'vacuum' : (old.otvoreno ? 'opened' : 'sealed')
}
});
```
### Phase 2: Update UI Components
- Create new filter components for material type
- Add inventory status indicators
- Color preview badges
- Storage condition icons
### Phase 3: Enhanced Features
1. **Barcode/QR Integration**: Generate QR codes for each spool
2. **Usage History**: Track which prints used which filament
3. **Alerts**: Low stock, expiry warnings
4. **Analytics**: Cost per print, filament usage trends
## DynamoDB Optimization
### Current Indexes
- brand-index
- tip-index
- status-index
### Proposed Indexes
```terraform
# NOTE: DynamoDB index keys must be top-level scalar attributes, so the
# nested paths below (e.g. "material.base") would first need denormalized
# top-level copies (e.g. materialBase, colorName) for these indexes to work.
global_secondary_index {
  name      = "material-color-index"
  hash_key  = "material.base"
  range_key = "color.name"
}

global_secondary_index {
  name      = "inventory-status-index"
  hash_key  = "condition.storageCondition"
  range_key = "inventory.available"
}

global_secondary_index {
  name      = "brand-type-index"
  hash_key  = "brand"
  range_key = "material.base"
}
```
## Example Queries
### Find all available green filaments
```javascript
// NOTE: a Query requires a KeyConditionExpression on the index's hash key,
// and "name" is a DynamoDB reserved word, so a filtered Scan with an
// attribute-name alias is the simplest correct form here.
const greenFilaments = await dynamodb.scan({
  TableName: TABLE_NAME,
  FilterExpression: 'contains(color.#n, :green) AND inventory.available > :zero',
  ExpressionAttributeNames: { '#n': 'name' },
  ExpressionAttributeValues: {
    ':green': 'Green',
    ':zero': 0
  }
}).promise();
```
### Get inventory summary
```javascript
const summary = await dynamodb.scan({
TableName: TABLE_NAME,
ProjectionExpression: 'brand, material.base, inventory'
}).promise();
const report = summary.Items.reduce((acc, item) => {
const key = `${item.brand}-${item.material.base}`;
acc[key] = (acc[key] || 0) + item.inventory.total;
return acc;
}, {});
```
## UI Improvements
### 1. **Visual Inventory Status**
```tsx
<div className="flex gap-2">
{filament.inventory.locations.vacuum > 0 && (
<Badge icon="vacuum" count={filament.inventory.locations.vacuum} />
)}
{filament.inventory.locations.opened > 0 && (
<Badge icon="box-open" count={filament.inventory.locations.opened} />
)}
</div>
```
### 2. **Color Swatches**
```tsx
<div
className="w-8 h-8 rounded-full border-2"
style={{ backgroundColor: filament.color.hex || getColorFromName(filament.color.name) }}
title={filament.color.name}
/>
```
### 3. **Smart Filters**
- Quick filters: "Ready to use", "Low stock", "Refills only"
- Material groups: "Standard PLA", "Engineering", "Specialty"
- Storage status: "Vacuum sealed", "Open spools", "In printer"
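Against the proposed structure, the quick filters reduce to simple predicates; a sketch with illustrative names:

```javascript
// Hypothetical quick-filter predicates over the proposed Filament shape.
const quickFilters = {
  readyToUse: (f) => f.inventory.available > 0,
  lowStock: (f) => f.inventory.available <= 1 && f.inventory.total > 0,
  refillsOnly: (f) => f.condition.isRefill,
};

// Apply a named quick filter to a list of filaments.
function applyQuickFilter(filaments, name) {
  return filaments.filter(quickFilters[name]);
}
```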
Would you like me to implement this improved structure?

View File

@@ -4,3 +4,21 @@ import '@testing-library/jest-dom'
const { TextEncoder, TextDecoder } = require('util');
global.TextEncoder = TextEncoder;
global.TextDecoder = TextDecoder;
// Mock axios globally
jest.mock('axios', () => ({
create: jest.fn(() => ({
get: jest.fn(),
post: jest.fn(),
put: jest.fn(),
delete: jest.fn(),
interceptors: {
request: {
use: jest.fn()
},
response: {
use: jest.fn()
}
}
}))
}))

View File

@@ -1,56 +0,0 @@
-- Add 1 refill and 1 spulna for each color as PLA Basic filaments
-- Run this with: psql $DATABASE_URL -f scripts/add-basic-refills.sql
-- First show what colors we have
SELECT name, hex FROM colors ORDER BY name;
-- Insert PLA Basic filaments with 1 refill and 1 spulna for each color that doesn't already have one
INSERT INTO filaments (tip, finish, boja, boja_hex, refill, spulna, kolicina, cena)
SELECT
'PLA' as tip,
'Basic' as finish,
c.name as boja,
c.hex as boja_hex,
1 as refill,
1 as spulna,
2 as kolicina, -- 1 refill + 1 spulna
'3999' as cena
FROM colors c
WHERE NOT EXISTS (
SELECT 1 FROM filaments f
WHERE f.tip = 'PLA'
AND f.finish = 'Basic'
AND f.boja = c.name
)
ON CONFLICT DO NOTHING;
-- Ensure existing PLA Basic filaments have at least 1 refill and 1 spulna
-- (GREATEST preserves any larger existing counts instead of overwriting them;
-- the SET expressions read the pre-update row values, so kolicina stays
-- consistent with the CHECK constraint kolicina = refill + spulna)
UPDATE filaments
SET refill = GREATEST(refill, 1),
    spulna = GREATEST(spulna, 1),
    kolicina = GREATEST(refill, 1) + GREATEST(spulna, 1)
WHERE tip = 'PLA'
  AND finish = 'Basic'
  AND (refill = 0 OR spulna = 0);
-- Show summary
SELECT
'Total PLA Basic filaments with refills and spulna' as description,
COUNT(*) as count
FROM filaments
WHERE tip = 'PLA'
AND finish = 'Basic'
AND refill = 1
AND spulna = 1;
-- Show all PLA Basic filaments
SELECT
boja as color,
refill,
spulna,
kolicina as quantity,
cena as price
FROM filaments
WHERE tip = 'PLA'
AND finish = 'Basic'
ORDER BY boja;

View File

@@ -1,17 +0,0 @@
#!/bin/bash
set -euo pipefail

echo "Running sale fields migration..."

# Get RDS endpoint from AWS
RDS_ENDPOINT=$(aws rds describe-db-instances --region eu-central-1 --db-instance-identifier filamenteka --query 'DBInstances[0].Endpoint.Address' --output text)

# Get database credentials from Secrets Manager
DB_CREDS=$(aws secretsmanager get-secret-value --region eu-central-1 --secret-id filamenteka-db-credentials --query 'SecretString' --output text)
DB_USER=$(jq -r '.username' <<< "$DB_CREDS")
DB_PASS=$(jq -r '.password' <<< "$DB_CREDS")
DB_NAME=$(jq -r '.database' <<< "$DB_CREDS")

# Run the migration (variables quoted to survive special characters)
PGPASSWORD="$DB_PASS" psql -h "$RDS_ENDPOINT" -U "$DB_USER" -d "$DB_NAME" -f database/migrations/014_add_sale_fields.sql

echo "Migration completed!"