# Installation

How to set up and run Envision Portal locally for development.
## Prerequisites

- Node.js 22 (managed via mise)
- pnpm
- PostgreSQL database
- Azure storage account with two containers: one for drafts, one for published datasets
- SMTP email server

Optional for full-stack platform development and operations:

- Docker for containerized development and deployment parity
- Azure resources for compute and storage-backed deployment workflows
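Before installing, it can help to confirm the core tools are on `PATH`. A minimal sketch (the `have` helper is ours, not part of the project):

```sh
# have: succeed if a command is available on PATH (helper for this sketch).
have() { command -v "$1" >/dev/null 2>&1; }

# Check the core toolchain; mise users can run `mise install` first.
for tool in node pnpm psql; do
  if have "$tool"; then
    echo "$tool: found"
  else
    echo "$tool: MISSING" >&2
  fi
done
```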
## Setup

Clone the repository and install dependencies:

```sh
git clone <your-repo-url>
cd envision-portal
pnpm install
```
Copy the example environment file and fill in the required values:

```sh
cp .env.example .env
```
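The required values are listed in the Environment Variables table below. For local development, a filled-in `.env` might look roughly like this (all values are illustrative placeholders, not real defaults):

```dotenv
# Illustrative placeholders only; substitute your own services and credentials.
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/envision
MAIL_HOST=localhost
MAIL_PORT=1025
MAIL_USER=dev
MAIL_PASS=dev
MAIL_FROM=no-reply@example.com
EMAIL_VERIFICATION_DOMAIN=http://localhost:3000
NUXT_SITE_URL=http://localhost:3000
NUXT_SITE_ENV=development
# Plus the AZURE_DRAFT_* and AZURE_PUBLISHED_* credentials for your storage account.
```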
Run database migrations:

```sh
pnpm prisma:migrate:deploy
```
Start the development server:

```sh
pnpm dev
```

The app runs at http://localhost:3000.
## Environment Variables

| Variable | Description |
|---|---|
| `DATABASE_URL` | PostgreSQL connection string |
| `MAIL_HOST` | SMTP host |
| `MAIL_PORT` | SMTP port |
| `MAIL_USER` | SMTP username |
| `MAIL_PASS` | SMTP password |
| `MAIL_FROM` | From address for outgoing emails |
| `EMAIL_VERIFICATION_DOMAIN` | Base URL used in verification email links |
| `AZURE_DRAFT_*` | Azure Data Lake credentials for draft storage |
| `AZURE_PUBLISHED_*` | Azure Data Lake credentials for published storage |
| `NUXT_SITE_URL` | Public URL of the application |
| `NUXT_SITE_ENV` | Environment: `development`, `staging`, or `production` |
| `EXTERNAL_API_KEY` | Optional key for external API integrations |
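Missing variables usually surface as runtime failures. As a sketch, a startup script could fail fast instead; the `check_env` helper and its variable list are ours (mirroring the table above) and may need adjusting:

```sh
# check_env: report required variables missing from the environment.
check_env() {
  required="DATABASE_URL MAIL_HOST MAIL_PORT MAIL_USER MAIL_PASS MAIL_FROM NUXT_SITE_URL"
  missing=""
  for var in $required; do
    # Indirect expansion via eval, POSIX-portable.
    eval "val=\${$var:-}"
    [ -z "$val" ] && missing="$missing $var"
  done
  if [ -n "$missing" ]; then
    echo "Missing required variables:$missing" >&2
    return 1
  fi
  echo "Environment OK"
}
```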
## Other Commands

| Command | Description |
|---|---|
| `pnpm build` | Build for production |
| `pnpm preview` | Preview the production build locally |
| `pnpm prisma:migrate:deploy` | Apply pending database migrations |
| `pnpm prisma:studio` | Open Prisma Studio to browse the database |
## Docker

A Dockerfile is included in the project root. Build and run the container:

```sh
docker build -t envision-portal .
docker run -p 3000:3000 --env-file .env envision-portal
```

Deployment via Kamal is configured in the `.kamal/` directory and `config/deploy.yml`.
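As a rough sketch, a Kamal `config/deploy.yml` has this general shape; the service name, image, and host below are placeholders, not this project's actual settings:

```yaml
service: envision-portal            # placeholder service name
image: your-registry/envision-portal
servers:
  - 203.0.113.10                    # placeholder host
registry:
  username: your-registry-user
  password:
    - KAMAL_REGISTRY_PASSWORD       # read from the environment by Kamal
env:
  clear:
    NUXT_SITE_ENV: production
  secret:
    - DATABASE_URL
```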
## Technical Stack Notes

The Envision Portal architecture is built around Nuxt and Nitro for web and API delivery, with Azure-backed storage for dataset files and containerized deployment workflows for maintainability.

Planned ecosystem components include:

- Search indexing services to improve dataset discovery
- Dedicated Python services for AI, validation, and data-processing workloads
- Extended automation for external dataset indexing and metadata enrichment