Roadmap
Problem
Currently, auto-login functionality does not work when sessions are enabled in the PayloadCMS configuration. Auto-login should support session-based authentication.
Proposed Solution
- Enable sessions in the Payload configuration.
- Update the auto-login route handler to create a new session for the user when auto-login is performed.
#485 opened by pavanbhaskardev
Problem
Currently, users cannot restore a backup to a newly created database. There is also no validation to prevent restoring a backup file of the wrong type (e.g., restoring a MongoDB backup to a PostgreSQL database).
Proposed Solution
- Allow users to create a new database instance using an existing backup file.
- Ensure the new database contains all the data from the backup file.
- Implement backup file type validation to prevent users from restoring backups to incompatible database types (e.g., prevent restoring a MongoDB backup to a PostgreSQL database).
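As a sketch of the validation step, the restore handler could inspect the file header before proceeding: `pg_dump` custom-format archives begin with the magic bytes `PGDMP`, so an obviously mismatched file can be rejected cheaply. Function names are illustrative; plain-SQL dumps and `mongodump` archives would need their own checks.

```shell
# Illustrative header check: pg_dump custom-format archives start with "PGDMP".
backup_type() {
  case "$(head -c 5 "$1")" in
    PGDMP) echo "postgres-custom" ;;
    *)     echo "unknown" ;;
  esac
}

# Example policy: refuse to restore a non-Postgres file to a Postgres database.
validate_restore() {
  target_db_type="$1"; backup_file="$2"
  if [ "$target_db_type" = "postgres" ] \
     && [ "$(backup_type "$backup_file")" != "postgres-custom" ]; then
    echo "error: backup file does not look like a Postgres dump" >&2
    return 1
  fi
}
```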
#484 opened by pavanbhaskardev
Problem
The current backups UI does not display the database backup type, and backups belonging to deleted services are grouped under a single "deleted services" entry, losing their detailed information.
Proposed Solution
- Display the database backup type in the backups UI so users can easily identify backup types.
- Update the UI for deleted services backups to show all available details, rather than grouping them or losing information.
#483 opened by pavanbhaskardev
#482 opened by jagadeesh507
Users get an 'Unsupported OS Version' error during onboarding when running on Proxmox or LXC containers. This happens because we use `lsb_release` to detect the OS, but Proxmox doesn't include it by default.
Quick workaround: users can install it manually with `sudo apt update && sudo apt install -y lsb-release`.
Expected fix: update onboarding to fall back to `/etc/os-release` or another method if `lsb_release` is not available.
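A sketch of the suggested fallback (function names are illustrative):

```shell
# Read PRETTY_NAME from an os-release style file (e.g. /etc/os-release).
read_os_release() {
  . "$1"
  printf '%s\n' "$PRETTY_NAME"
}

# Prefer lsb_release, but fall back to /etc/os-release when it is missing,
# as on default Proxmox/LXC images.
detect_os() {
  if command -v lsb_release >/dev/null 2>&1; then
    lsb_release -ds
  elif [ -r /etc/os-release ]; then
    read_os_release /etc/os-release
  else
    echo "unknown"
  fi
}
```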
#481 opened by manikanta9176
Currently, the nginx configuration for servers has default values such as `client-max-body-size: 1m`, which restricts clients from uploading files larger than 1 MB. There is a need to support customization for all available nginx parameters to allow more flexible deployments.
Requested Feature:
- Add support for customizing all nginx properties as documented in the Dokku nginx proxy config guide.
Example:
- Allow users to set `client-max-body-size` to desired values (e.g., 10m, 100m, etc.).
- Make other nginx parameters configurable through environment variables or configuration files.
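For reference, Dokku already exposes these as per-app proxy properties via `nginx:set`; a sketch of the kind of customization requested (the app name `my-app` is a placeholder):

```shell
dokku nginx:set my-app client-max-body-size 100m
dokku nginx:set my-app proxy-read-timeout 120s
dokku proxy:build-config my-app   # rebuild the nginx config so changes apply
```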
Benefit:
- Users can tailor nginx's behavior to their application's needs, removing hardcoded limits and enabling advanced configurations.
Acceptance Criteria:
- All nginx properties listed in the Dokku documentation can be customized by users.
- Document how users can set each property.
- Provide sensible defaults to avoid breaking existing deployments.
#480 opened by pavanbhaskardev
When viewing the projects page on mobile devices (tested using iPhone 12 Pro simulation in Chrome DevTools), the layout appears broken or not fully responsive. The project details (like status badges and tabs such as General, Logs, Deployments, Backup) do not align properly, and some elements overlap.
Steps to Reproduce:
Open the projects view page (e.g., /project/[projectId]) in Chrome.
Toggle device toolbar and select iPhone 12 Pro.
(Screenshot of the broken mobile layout is attached in the issue.)
#479 opened by jagadeesh507
- `Make as Default` button is always disabled in Domains. Reproduction link
- When adding a new domain, it is set as the default domain by default. Reproduction link
- When clicking `Generate default Domain` in onboarding, the generated domain is `undefined.up.dflow.sh` if the connection type is `SSH`.
#432 opened by jagadeesh507
- #429
- Copy Files to Image
- CouchDB
- Cron Restart
- Elasticsearch
- Grafana/Graphite/Statsd
- HTTP Auth
- Maintenance mode
- Meilisearch
- Memcached
- Nats
- Omnisci
- Pushpin
- RabbitMQ
- Redirect
- Registry
- RethinkDB
- Scheduler Kubernetes
- Scheduler Nomad
- Solr
- SSH Hostkeys
- Typesense
#425 opened by pavanbhaskardev
Currently, the GitHub App deployment is triggered for the `x-github-event: push` event. To improve deployment automation, we need to also trigger deployments when a `fork-sync` event occurs.
Proposed Solution:
- Update the event handler to check for the `fork-sync` event.
- Ensure deployment is triggered when a `fork-sync` event is received, similar to how it works for `push` events.
- Test that deployments occur successfully for both event types.
Motivation:
- Automatically deploying on fork-sync keeps forked repositories up-to-date and consistent with the main branch.
Additional Context:
- This will help maintain parity between forks and the source repo, improving developer experience and CI workflows.
#422 opened by pavanbhaskardev
Synchronize Dokku actions performed on the server with the website. The solution should:
- Display Dokku data by executing Dokku commands on the server and showing the results on the site.
- Provide a 'Sync' button to manually synchronize server data to the website.
- Explore and propose any better solutions for seamless sync between server and website.
#417 opened by manikanta9176
Provide a secure way for users to access the server terminal through the platform. This should include authentication and appropriate permission controls to ensure only authorized access.
#416 opened by manikanta9176
Implement a feature to enable users to add new Dokku plugins through the system interface. This should allow for easier integration and management of Dokku plugins directly from the platform.
#415 opened by manikanta9176
Description:
Implement a Redis-based caching layer for storing server details in dFlow. The cache should automatically revalidate when:
- No data exists in Redis, or
- A new server is added.
This will reduce database queries, speed up UI/API responses, and keep data fresh without manual cache resets.
Acceptance Criteria:
- Store server details in Redis upon initial fetch.
- When Redis returns no data, fetch from the database and repopulate Redis.
- On server creation, trigger a cache refresh for the affected data set.
- Use TTL (configurable) to ensure data doesn’t get stale indefinitely.
- Ensure cache invalidation on relevant updates (rename, delete, status change).
- Add logs for cache hits, misses, and refresh events.
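The cache-aside flow in the criteria above could look like the following sketch (key names, the 300 s TTL, and the `fetch_from_db` helper are illustrative placeholders):

```shell
get_server_details() {
  id="$1"
  cached=$(redis-cli GET "server:$id")
  if [ -n "$cached" ]; then
    echo "$cached"                       # cache hit
  else
    details=$(fetch_from_db "$id")       # hypothetical DB lookup
    # EX sets the configurable TTL (here 300 s) so stale data expires
    redis-cli SET "server:$id" "$details" EX 300 >/dev/null
    echo "$details"                      # cache miss, repopulated
  fi
}

invalidate_server() {
  redis-cli DEL "server:$1" >/dev/null   # on rename/delete/status change
}
```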
Benefits:
- Improves performance by reducing DB load.
- Keeps server data up-to-date without manual intervention.
- Provides a consistent and predictable cache refresh flow.
#414 opened by charanm927
Description:
Enable dFlow to connect databases hosted on separate servers via Tailnet (Tailscale network). By default, create all user databases on dFlow-managed servers and connect services to them securely using Tailnet. This approach improves security, performance, and simplifies DB networking without exposing databases to the public internet.
Acceptance Criteria:
- Configure Tailnet to allow secure, private connections between servers hosting services and databases.
- Automatically add database server(s) to the same Tailnet as application servers.
- Create all user databases on dFlow-managed DB servers by default.
- Generate and inject private Tailnet connection strings into services.
- Ensure connections remain functional during Tailnet IP changes (use MagicDNS where possible).
- Log connection setup and errors for troubleshooting.
Benefits:
- Improves database security by removing public exposure.
- Simplifies multi-server DB networking for users.
- Centralizes database hosting on optimized servers.
#413 opened by charanm927
Description:
Extend dFlow to support provisioning and managing external databases from popular hosted providers, allowing users to install and connect their databases outside dFlow-managed servers. This will give users flexibility in choosing specialized DB hosting while still integrating fully with their dFlow projects.
Acceptance Criteria:
- Add Neon integration for PostgreSQL.
- Add MongoDB Atlas integration for MongoDB.
- Add Turso integration for SQLite.
- Allow users to provision new DB instances directly from dFlow UI.
- Store and manage connection credentials securely.
- Automatically inject DB connection strings into selected services.
- Allow linking of existing external DBs (import connection details manually).
Benefits:
- Gives users flexibility to choose best-in-class database hosting.
- Reduces server load by offloading DB hosting to external providers.
- Enables globally distributed databases with minimal setup.
#412 opened by charanm927
Description:
Move the dFlow marketing website from its current hosting setup to run entirely on the dFlow App infrastructure. This will consolidate hosting, simplify deployment workflows, and allow us to manage the marketing site using the same platform as other dFlow projects.
Acceptance Criteria:
- Set up a new service in dFlow for the marketing website.
- Configure build settings, environment variables, and domains.
- Ensure SSL and CDN caching are properly configured.
- Test staging deployment before production cutover.
- Migrate analytics, forms, and integrations without downtime.
- Decommission old hosting after migration is confirmed.
Benefits:
- Reduces hosting complexity by using dFlow itself.
- Demonstrates real-world usage of the dFlow platform.
- Improves deployment control and visibility.
#411 opened by charanm927
Description:
Migrate all previous projects hosted on Railway to dFlow by creating and using a dedicated ContenQL account. This migration should ensure all project configurations, environment variables, and services are replicated in dFlow while maintaining service availability during the transition.
Acceptance Criteria:
- Create a dedicated ContenQL account in dFlow for the migrated projects.
- Export project configurations, environment variables, and deployment settings from Railway.
- Recreate services and configurations in the ContenQL account within dFlow.
- Migrate associated databases and persistent storage.
- Verify all services are functional post-migration.
- Log migration steps for audit purposes.
Benefits:
- Centralizes management of migrated projects under the ContenQL account.
- Ensures smooth transition from Railway to dFlow with minimal downtime.
- Maintains consistency and security of migrated environments.
#410 opened by charanm927
Description:
Create a migration script that allows users to seamlessly move their existing projects from Railway to any dFlow instance (cloud or self-hosted). The script should fetch project configurations, environment variables, and service definitions from Railway, then recreate them in dFlow with minimal manual input.
Acceptance Criteria:
- Accept Railway API key and target dFlow instance credentials/URL as input.
- Fetch project details, environment variables, and deployment settings from Railway.
- Map Railway services to equivalent dFlow services (Docker/Dokku/etc.).
- Transfer environment variables and secrets securely.
- Optionally migrate persistent data (databases, volumes) if applicable.
- Provide progress logging and a final migration summary.
- Support both cloud-hosted and self-hosted dFlow instances.
Benefits:
- Makes switching from Railway to dFlow frictionless.
- Saves time by automating repetitive setup tasks.
- Encourages adoption of self-hosted or cloud dFlow instances.
#409 opened by charanm927
Description:
Add functionality to migrate databases (internal or external) from one server to another within dFlow. This will help teams move workloads, scale infrastructure, and perform maintenance without manual DB export/import steps.
Acceptance Criteria:
- Support MySQL, PostgreSQL, and MongoDB in the first version.
- Detect source and target server configurations automatically.
- Option to migrate databases with minimal downtime.
- Transfer associated credentials and update dependent services in dFlow.
- Validate migrated data integrity after transfer.
- Log migration steps and results for auditing.
Benefits:
- Simplifies server upgrades and replacements.
- Reduces downtime during infrastructure changes.
- Ensures data consistency and minimizes manual intervention.
#408 opened by charanm927
Description:
Add the ability to configure and run backups for external databases connected to dFlow projects. This will allow teams to securely back up MySQL, PostgreSQL, MongoDB, and other external DB instances to supported storage providers (S3, Backblaze, etc.) without relying on server-level scripts.
Acceptance Criteria:
- Allow users to connect an external database by hostname, port, credentials, and type.
- Support MySQL, PostgreSQL, and MongoDB in the first release.
- Configure backup destinations (S3, Backblaze B2, GCS, local server storage).
- Allow manual and scheduled backups.
- Encrypt backups in transit and at rest.
- Provide one-click restore to the same or different DB instance.
- Log all backup and restore actions for audit purposes.
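One scheduled run for an external Postgres instance might look like this sketch (the connection URL and bucket name are placeholders; `pg_dump`'s custom format is already compressed, and `--sse` requests server-side encryption at rest):

```shell
pg_dump --format=custom "postgres://user:pass@db.example.com:5432/app" \
  | aws s3 cp --sse AES256 - "s3://backup-bucket/app/$(date +%F).dump"
```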
Benefits:
- Protects critical data stored outside dFlow-managed servers.
- Centralizes backup management for multiple database types.
- Reduces the risk of data loss for externally hosted DBs.
#407 opened by charanm927
Description:
Add automated and on-demand cleanup tools for Docker, Dokku, and dFlow resources to free up space, remove unused data, and maintain optimal server performance. This will help prevent bloated environments and reduce potential deployment issues.
Acceptance Criteria:
Docker Cleanup:
- Remove unused images, containers, volumes, and networks.
- Option for safe mode (only remove items older than X days).
Dokku Cleanup:
- Remove unused buildpacks, caches, and old releases.
- Clear any dangling Dokku artifacts.
dFlow Cleanup:
- Purge old deployment logs beyond retention limit.
- Remove orphaned files and unused backups.
- Clear stale temporary data.
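On the Docker side, a sketch of the safe-mode variant using built-in prune filters (the 240 h cutoff stands in for "older than 10 days"):

```shell
# Remove stopped containers, unused networks, dangling images and build cache
# older than the cutoff; --all extends this to all unused images.
docker system prune --all --force --filter "until=240h"
# Volumes are not covered by the "until" filter and need their own pass.
docker volume prune --force
```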
Benefits:
- Frees up disk space and improves server performance.
- Reduces risk of failed builds due to low storage.
- Keeps environments tidy and easier to maintain.
#405 opened by charanm927
Description: Integrate Ansible-based configuration management into dFlow to standardize and automate server setup, deployment tasks, and environment consistency. This will help ensure reproducible environments across all servers and reduce manual configuration errors.
Acceptance Criteria:
- Define Ansible playbooks for common server provisioning tasks.
- Store and version-control configuration files in a central location.
- Allow team/server-specific Ansible overrides for custom setups.
- Provide a way to trigger Ansible runs from within dFlow (UI and/or API).
- Log playbook execution results and any errors.
Benefits:
- Consistent and repeatable server setups.
- Easier maintenance of large server fleets.
- Reduced human error during provisioning and updates.
#401 opened by charanm927
Currently, the default monitoring in the project does not include any alerting mechanism. To improve system reliability and user awareness, add alert functionalities to the default monitoring setup.
Proposed changes
- Integrate display of simple system alerts originating from Beszel in the frontend monitoring tab.
- These alerts will only be shown in the monitoring tab and will not trigger notifications.
Benefits
- Improved observability for users by surfacing important system alerts from Beszel in the UI.
- No notification noise; alerts are strictly visual within the monitoring tab.
Additional context
This feature will help users proactively manage their workflows and maintain system health by providing visibility to Beszel alerts.
#390 opened by manikanta9176
Summary
Create a centralized JSON file that defines and fixes the versions of all third-party packages used across the project, including beszel (hub and agent), netdata, dokku, buildkit, railpack, dokku plugins, and others.
Motivation
Managing package versions manually can lead to inconsistencies and unexpected behavior across servers. A single JSON file will allow us to:
- Fix and track versions for all dependencies in one place.
- Simplify the update process for all servers by updating the JSON file when releasing updates.
- Enable automated update mechanisms (either triggered by release or user action) to use the JSON file as the authoritative source for package versions.
Acceptance Criteria
- Create a JSON file listing all relevant packages and their versions.
- Implement logic to update servers based on the versions specified in the JSON file.
- Document the update process for maintainers and users.
- Ensure future releases only require updating the JSON file to propagate new versions.
Additional Context
This feature will provide greater reliability and efficiency for both maintainers and users when updating server environments.
Packages to include (not exhaustive):
- beszel (hub and agent)
- netdata
- dokku
- buildkit
- railpack
- dokku plugin versions
- Any other relevant packages
Please discuss additional package candidates and implementation details as needed.
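The manifest could take a shape like the following (all version numbers below are placeholders, not recommendations):

```json
{
  "dokku": "0.35.0",
  "netdata": "2.0.0",
  "buildkit": "0.17.0",
  "railpack": "0.1.0",
  "beszel": { "hub": "0.10.0", "agent": "0.10.0" },
  "dokkuPlugins": {
    "postgres": "1.40.0",
    "letsencrypt": "0.20.0"
  }
}
```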
#387 opened by manikanta9176
- Currently, when reset-onboarding is triggered, the button is disabled.
- The user has no indication of what is happening in the background.
- Show an alert on the server-details page indicating that reset-onboarding has been triggered for the server.
#384 opened by pavanbhaskardev
Many users don’t have Discord accounts. Please consider using GitHub for eligibility instead, so more people can participate.
#376 opened by manikanta9176
Display a toast notification to the user when a new version of the app is available. This will encourage users to refresh and use the latest build, preventing the use of stale versions.
Benefits:
- Ensures users always have access to the latest features and bug fixes.
- Improves user experience by avoiding issues caused by outdated builds.
Acceptance Criteria:
- Detect when a new version of the app is available.
- Show a clear and actionable toast notification prompting users to refresh or reload.
- The toast should only appear when the user is on a stale build.
#374 opened by pavanbhaskardev