Roadmap
- #429
- Copy Files to Image
- CouchDB
- Cron Restart
- Elasticsearch
- Grafana/Graphite/Statsd
- HTTP Auth
- Maintenance mode
- Meilisearch
- Memcached
- Nats
- Omnisci
- Pushpin
- RabbitMQ
- Redirect
- Registry
- RethinkDB
- Scheduler Kubernetes
- Scheduler Nomad
- Solr
- SSH Hostkeys
- Typesense
#425 opened by pavanbhaskardev
Currently, the GitHub App deployment is triggered only for the `x-github-event: push` header. To improve deployment automation, we need to also trigger deployments when a `fork-sync` event occurs.
Proposed Solution:
- Update the event handler to check for the `fork-sync` event.
- Ensure deployment is triggered when a `fork-sync` event is received, similar to how it works for `push` events.
- Test that deployments occur successfully for both event types.
Motivation:
- Automatically deploying on fork-sync keeps forked repositories up-to-date and consistent with the main branch.
Additional Context:
- This will help maintain parity between forks and the source repo, improving developer experience and CI workflows.
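The event check described above can be sketched as a small predicate on the `x-github-event` header. The function and set names below are illustrative, not dFlow's actual handler code:

```typescript
// GitHub delivers the event type in the `x-github-event` request header.
// Deployments should fire for both `push` and `fork-sync` deliveries.
const DEPLOY_EVENTS = new Set(['push', 'fork-sync'])

export function shouldTriggerDeployment(eventHeader: string | undefined): boolean {
  if (!eventHeader) return false
  return DEPLOY_EVENTS.has(eventHeader.toLowerCase())
}
```

Keeping the accepted events in one set makes it trivial to extend the handler later without touching the dispatch logic.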
#422 opened by pavanbhaskardev
We have a few approaches to address the uninstallation of a plugin when it's actively being used:
- Fully restrict uninstallation while the plugin is in use.
- Allow uninstallation, but display an alert informing users that it may impact the services utilizing the plugin. (If feasible, we can disable related service operations if the plugin is not installed.)
- Automatically remove any dependent services when the plugin is uninstalled.
Note: For now, we’ll proceed with option 1, which is to fully restrict uninstallation.
#418 opened by manikanta9176
Synchronize Dokku actions performed on the server with the website. The solution should:
- Display Dokku data by executing Dokku commands on the server and showing the results on the site.
- Provide a 'Sync' button to manually synchronize server data to the website.
- Explore and propose any better solutions for seamless sync between server and website.
#417 opened by manikanta9176
Provide a secure way for users to access the server terminal through the platform. This should include authentication and appropriate permission controls to ensure only authorized access.
#416 opened by manikanta9176
Implement a feature to enable users to add new Dokku plugins through the system interface. This should allow for easier integration and management of Dokku plugins directly from the platform.
#415 opened by manikanta9176
Description:
Implement a Redis-based caching layer for storing server details in dFlow. The cache should automatically revalidate when:
- No data exists in Redis, or
- A new server is added.
This will reduce database queries, speed up UI/API responses, and keep data fresh without manual cache resets.
Acceptance Criteria:
- Store server details in Redis upon initial fetch.
- When Redis returns no data, fetch from the database and repopulate Redis.
- On server creation, trigger a cache refresh for the affected data set.
- Use TTL (configurable) to ensure data doesn’t get stale indefinitely.
- Ensure cache invalidation on relevant updates (rename, delete, status change).
- Add logs for cache hits, misses, and refresh events.
Benefits:
- Improves performance by reducing DB load.
- Keeps server data up-to-date without manual intervention.
- Provides a consistent and predictable cache refresh flow.
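The acceptance criteria above describe a classic cache-aside flow. A minimal sketch follows; `Store` stands in for a Redis client (a real client such as ioredis is Promise-based, but the store is kept synchronous here so the control flow is easy to follow), and all names are illustrative:

```typescript
// Cache-aside: read from cache, fall back to the DB on a miss, repopulate with a TTL.
interface Store {
  get(key: string): string | null
  set(key: string, value: string, ttlSeconds: number): void
  del(key: string): void
}

export function getServerDetails<T>(
  store: Store,
  key: string,
  ttlSeconds: number,
  fetchFromDb: () => T,
): { data: T; cacheHit: boolean } {
  const cached = store.get(key)
  if (cached !== null) {
    return { data: JSON.parse(cached) as T, cacheHit: true } // cache hit
  }
  const data = fetchFromDb()                                 // miss: query the database
  store.set(key, JSON.stringify(data), ttlSeconds)           // repopulate with a TTL
  return { data, cacheHit: false }
}

// On create/rename/delete/status change, drop the key so the next read refetches.
export function invalidateServerCache(store: Store, key: string): void {
  store.del(key)
}
```

The `cacheHit` flag in the return value is a convenient hook for the hit/miss logging the criteria call for.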
#414 opened by charanm927
Description:
Enable dFlow to connect databases hosted on separate servers via Tailnet (Tailscale network). By default, create all user databases on dFlow-managed servers and connect services to them securely using Tailnet. This approach improves security, performance, and simplifies DB networking without exposing databases to the public internet.
Acceptance Criteria:
- Configure Tailnet to allow secure, private connections between servers hosting services and databases.
- Automatically add database server(s) to the same Tailnet as application servers.
- Create all user databases on dFlow-managed DB servers by default.
- Generate and inject private Tailnet connection strings into services.
- Ensure connections remain functional during Tailnet IP changes (use MagicDNS where possible).
- Log connection setup and errors for troubleshooting.
Benefits:
- Improves database security by removing public exposure.
- Simplifies multi-server DB networking for users.
- Centralizes database hosting on optimized servers.
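The MagicDNS point above is the key to surviving Tailscale IP changes: injected connection strings should use the stable `<machine>.<tailnet>.ts.net` hostname rather than a node IP. A sketch of generating such a string (all names and the tailnet domain are illustrative):

```typescript
interface DbConfig {
  engine: 'postgres' | 'mysql' | 'mongodb'
  machineName: string   // Tailscale machine name of the DB server
  tailnetDomain: string // e.g. 'example.ts.net' (hypothetical)
  port: number
  user: string
  password: string
  database: string
}

// Build a private connection string over Tailnet using the MagicDNS hostname,
// which stays stable even when the node's Tailscale IP rotates.
export function tailnetConnectionString(c: DbConfig): string {
  const host = `${c.machineName}.${c.tailnetDomain}`
  const scheme = c.engine === 'postgres' ? 'postgresql' : c.engine
  const auth = `${encodeURIComponent(c.user)}:${encodeURIComponent(c.password)}`
  return `${scheme}://${auth}@${host}:${c.port}/${c.database}`
}
```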
#413 opened by charanm927
Description:
Extend dFlow to support provisioning and managing external databases from popular hosted providers, allowing users to install and connect their databases outside dFlow-managed servers. This will give users flexibility in choosing specialized DB hosting while still integrating fully with their dFlow projects.
Acceptance Criteria:
- Add Neon integration for PostgreSQL.
- Add MongoDB Atlas integration for MongoDB.
- Add Turso integration for SQLite.
- Allow users to provision new DB instances directly from dFlow UI.
- Store and manage connection credentials securely.
- Automatically inject DB connection strings into selected services.
- Allow linking of existing external DBs (import connection details manually).
Benefits:
- Gives users flexibility to choose best-in-class database hosting.
- Reduces server load by offloading DB hosting to external providers.
- Enables globally distributed databases with minimal setup.
#412 opened by charanm927
Description:
Move the dFlow marketing website from its current hosting setup to run entirely on the dFlow App infrastructure. This will consolidate hosting, simplify deployment workflows, and allow us to manage the marketing site using the same platform as other dFlow projects.
Acceptance Criteria:
- Set up a new service in dFlow for the marketing website.
- Configure build settings, environment variables, and domains.
- Ensure SSL and CDN caching are properly configured.
- Test staging deployment before production cutover.
- Migrate analytics, forms, and integrations without downtime.
- Decommission old hosting after migration is confirmed.
Benefits:
- Reduces hosting complexity by using dFlow itself.
- Demonstrates real-world usage of the dFlow platform.
- Improves deployment control and visibility.
#411 opened by charanm927
Description:
Migrate all previous projects hosted on Railway to dFlow by creating and using a dedicated ContenQL account. This migration should ensure all project configurations, environment variables, and services are replicated in dFlow while maintaining service availability during the transition.
Acceptance Criteria:
- Create a dedicated ContenQL account in dFlow for the migrated projects.
- Export project configurations, environment variables, and deployment settings from Railway.
- Recreate services and configurations in the ContenQL account within dFlow.
- Migrate associated databases and persistent storage.
- Verify all services are functional post-migration.
- Log migration steps for audit purposes.
Benefits:
- Centralizes management of migrated projects under the ContenQL account.
- Ensures smooth transition from Railway to dFlow with minimal downtime.
- Maintains consistency and security of migrated environments.
#410 opened by charanm927
Description:
Create a migration script that allows users to seamlessly move their existing projects from Railway to any dFlow instance (cloud or self-hosted). The script should fetch project configurations, environment variables, and service definitions from Railway, then recreate them in dFlow with minimal manual input.
Acceptance Criteria:
- Accept Railway API key and target dFlow instance credentials/URL as input.
- Fetch project details, environment variables, and deployment settings from Railway.
- Map Railway services to equivalent dFlow services (Docker/Dokku/etc.).
- Transfer environment variables and secrets securely.
- Optionally migrate persistent data (databases, volumes) if applicable.
- Provide progress logging and a final migration summary.
- Support both cloud-hosted and self-hosted dFlow instances.
Benefits:
- Makes switching from Railway to dFlow frictionless.
- Saves time by automating repetitive setup tasks.
- Encourages adoption of self-hosted or cloud dFlow instances.
#409 opened by charanm927
Description:
Add functionality to migrate databases (internal or external) from one server to another within dFlow. This will help teams move workloads, scale infrastructure, and perform maintenance without manual DB export/import steps.
Acceptance Criteria:
- Support MySQL, PostgreSQL, and MongoDB in the first version.
- Detect source and target server configurations automatically.
- Option to migrate databases with minimal downtime.
- Transfer associated credentials and update dependent services in dFlow.
- Validate migrated data integrity after transfer.
- Log migration steps and results for auditing.
Benefits:
- Simplifies server upgrades and replacements.
- Reduces downtime during infrastructure changes.
- Ensures data consistency and minimizes manual intervention.
#408 opened by charanm927
Description:
Add the ability to configure and run backups for external databases connected to dFlow projects. This will allow teams to securely back up MySQL, PostgreSQL, MongoDB, and other external DB instances to supported storage providers (S3, Backblaze, etc.) without relying on server-level scripts.
Acceptance Criteria:
- Allow users to connect an external database by hostname, port, credentials, and type.
- Support MySQL, PostgreSQL, and MongoDB in the first release.
- Configure backup destinations (S3, Backblaze B2, GCS, local server storage).
- Allow manual and scheduled backups.
- Encrypt backups in transit and at rest.
- Provide one-click restore to the same or different DB instance.
- Log all backup and restore actions for audit purposes.
Benefits:
- Protects critical data stored outside dFlow-managed servers.
- Centralizes backup management for multiple database types.
- Reduces the risk of data loss for externally hosted DBs.
#407 opened by charanm927
Description:
Upgrade dFlow’s backend to the latest version of Payload CMS and replace the current custom soft-delete/trash system with Payload’s native trash feature. This will simplify code maintenance, improve performance, and ensure better compatibility with future Payload updates.
Acceptance Criteria:
- Upgrade Payload CMS to the latest stable release.
- Replace custom trash/soft-delete logic with Payload’s built-in trash functionality.
- Migrate existing “deleted” data to be compatible with the native trash system.
- Test all collections for proper trash/restore behavior.
- Remove unused code related to the old trash implementation.
- Update admin UI labels and flows if needed.
Benefits:
- Reduces maintenance overhead by removing custom code.
- Ensures compatibility with future Payload CMS releases.
- Improves reliability of trash and restore operations.
#406 opened by charanm927
Description:
Add automated and on-demand cleanup tools for Docker, Dokku, and dFlow resources to free up space, remove unused data, and maintain optimal server performance. This will help prevent bloated environments and reduce potential deployment issues.
Acceptance Criteria:
Docker Cleanup:
- Remove unused images, containers, volumes, and networks.
- Option for safe mode (only remove items older than X days).
Dokku Cleanup:
- Remove unused buildpacks, caches, and old releases.
- Clear any dangling Dokku artifacts.
dFlow Cleanup:
- Purge old deployment logs beyond retention limit.
- Remove orphaned files and unused backups.
- Clear stale temporary data.
Benefits:
- Frees up disk space and improves server performance.
- Reduces risk of failed builds due to low storage.
- Keeps environments tidy and easier to maintain.
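For the Docker part, the "safe mode" above maps naturally onto `docker system prune`'s `until` filter, which limits pruning to items older than a cutoff. A sketch of building the invocation (the helper name is illustrative; the flags are standard `docker system prune` options):

```typescript
// Build the argv for `docker system prune`. With safeModeDays set, only items
// older than N days are removed; without it, everything unused is pruned.
export function dockerPruneArgs(safeModeDays?: number): string[] {
  const args = ['system', 'prune', '--all', '--force']
  if (safeModeDays !== undefined) {
    args.push('--filter', `until=${safeModeDays * 24}h`)
  }
  return args
}
```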
#405 opened by charanm927
Description:
Introduce a draft state for architecture configurations in dFlow so users can work on server/service architecture changes without immediately applying them to the live environment. This allows teams to prepare and review setups before finalizing.
Acceptance Criteria:
- Add “Draft” status to architecture definitions.
- Allow users to save changes in draft without affecting the live configuration.
- Provide a clear UI indicator when an architecture is in draft mode.
- Enable publishing of a draft to make it active.
- Store and track multiple draft versions if needed.
Benefits:
- Reduces risk from incomplete or incorrect configuration changes.
- Enables collaborative review before deployment.
- Improves workflow for complex architecture planning.
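Since dFlow's backend is Payload CMS, one candidate mechanism is Payload's built-in draft support (`versions.drafts`), which adds a `_status: 'draft' | 'published'` field and a saveable draft state per document. The collection shape below is an illustrative sketch, typed loosely so it stands alone:

```typescript
// Hypothetical collection config enabling Payload's draft/publish workflow
// for architecture definitions. Field names are illustrative.
export const Architectures = {
  slug: 'architectures',
  versions: { drafts: true }, // enables draft vs. published state per document
  fields: [
    { name: 'name', type: 'text', required: true },
    { name: 'definition', type: 'json' }, // hypothetical field holding the architecture config
  ],
} as const
```

Using the framework's native drafts would also cover the "multiple draft versions" criterion via Payload's version history.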
#403 opened by charanm927
Description: Add support for automated Docker backups in dFlow using Restic. This will allow users to create secure, incremental, and deduplicated backups of their container volumes, with storage in supported backend providers (S3, etc.).
Acceptance Criteria:
- Detect and list volumes for each Docker service in dFlow.
- Allow users to configure Restic backup settings (schedule, retention policy, storage backend).
- Run backups automatically on schedule and allow manual triggers.
- Support restore operations to the same or a different server.
- Encrypt backups with user-defined or system-generated keys.
- Log all backup and restore activities.
Benefits:
- Provides reliable and secure backups for Docker data.
- Enables quick recovery from data loss or corruption.
- Minimizes storage costs through Restic’s deduplication.
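Under the hood this boils down to two Restic invocations per volume: `restic backup` for the incremental, deduplicated snapshot, and `restic forget --prune` for retention. A sketch of building the argv (repo/paths are illustrative; the repository password would be supplied via the `RESTIC_PASSWORD` environment variable rather than argv):

```typescript
// argv for `restic -r <repo> backup <path>`: snapshot one Docker volume.
export function resticBackupArgs(repo: string, volumePath: string): string[] {
  return ['-r', repo, 'backup', volumePath]
}

// argv for `restic -r <repo> forget --keep-daily N --prune`: apply retention
// and reclaim space from dropped snapshots.
export function resticRetentionArgs(repo: string, keepDaily: number): string[] {
  return ['-r', repo, 'forget', '--keep-daily', String(keepDaily), '--prune']
}
```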
#402 opened by charanm927
Description: Integrate Ansible-based configuration management into dFlow to standardize and automate server setup, deployment tasks, and environment consistency. This will help ensure reproducible environments across all servers and reduce manual configuration errors.
Acceptance Criteria:
- Define Ansible playbooks for common server provisioning tasks.
- Store and version-control configuration files in a central location.
- Allow team/server-specific Ansible overrides for custom setups.
- Provide a way to trigger Ansible runs from within dFlow (UI and/or API).
- Log playbook execution results and any errors.
Benefits:
- Consistent and repeatable server setups.
- Easier maintenance of large server fleets.
- Reduced human error during provisioning and updates.
#401 opened by charanm927
Description:
When a server is deleted in dFlow, automatically remove the associated Tailscale machine from the Tailscale network. This ensures that no orphaned devices remain connected and avoids potential security or network clutter issues.
Acceptance Criteria:
- Detect Tailscale device(s) linked to a server upon deletion.
- Automatically remove those devices from Tailscale.
- Confirm deletion was successful and log the action.
- Handle cases where the Tailscale device is already missing gracefully.
Benefits:
- Maintains a clean and secure Tailscale network.
- Prevents unused devices from consuming Tailscale node slots.
- Reduces manual cleanup after server removal.
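Tailscale exposes device removal via its v2 REST API (`DELETE /api/v2/device/{id}`, authenticated with an API key as the Basic-auth username). A sketch, with the "already missing" case handled gracefully as the acceptance criteria require:

```typescript
// Map the HTTP status of the delete call onto the outcomes the criteria name:
// 404 means the device was already gone, which we treat as success.
export function interpretDeleteResponse(status: number): 'removed' | 'already-missing' {
  if (status === 404) return 'already-missing'
  if (status >= 200 && status < 300) return 'removed'
  throw new Error(`Tailscale device delete failed with status ${status}`)
}

// Remove the Tailscale device linked to a deleted server (function name illustrative).
export async function removeTailscaleDevice(deviceId: string, apiKey: string): Promise<'removed' | 'already-missing'> {
  const auth = Buffer.from(`${apiKey}:`).toString('base64')
  const res = await fetch(`https://api.tailscale.com/api/v2/device/${encodeURIComponent(deviceId)}`, {
    method: 'DELETE',
    headers: { Authorization: `Basic ${auth}` },
  })
  return interpretDeleteResponse(res.status)
}
```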
#400 opened by charanm927
Description:
Introduce functionality in dFlow that allows users to purge all Redis queues either for:
- A specific server, or
- All servers within a team.
This should give users more control over their environments and help in clearing stuck jobs, failed tasks, or outdated data without manual Redis intervention.
Acceptance Criteria:
- Add purge option in the server settings.
- Add purge option in the team-level settings for all servers.
- Prompt confirmation before execution to prevent accidental data loss.
- Log purge actions for audit purposes.
- Ensure operation is fast and does not disrupt other running services unnecessarily.
Benefits:
- Simplifies queue management for users.
- Quickly resolves issues caused by stuck or corrupted jobs.
- Reduces reliance on manual Redis commands.
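Scoping the purge correctly is the main subtlety: a server-level purge must touch only that server's queue keys, while a team-level purge covers every server in the team. Assuming queue keys are namespaced like `queue:<serverId>:<name>` (a hypothetical convention, not dFlow's actual scheme), the selection can be sketched as:

```typescript
// Pick the Redis keys a purge should delete for the requested scope.
export function keysToPurge(
  allKeys: string[],
  scope: { serverId?: string; teamServerIds?: string[] },
): string[] {
  const ids = scope.serverId ? [scope.serverId] : (scope.teamServerIds ?? [])
  return allKeys.filter((key) => ids.some((id) => key.startsWith(`queue:${id}:`)))
}
```

The returned keys can then be deleted in a single pipeline after the user confirms, keeping the operation fast and leaving non-queue keys untouched.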
#399 opened by charanm927
Description:
Update the current bottom-right toast notification (which says "Syncing") into a persistent, expandable bubble UI. This bubble should also be capable of displaying terminal output, so users can monitor real-time logs while syncing or performing other actions.
Acceptance Criteria:
- Replace the current “Syncing” toast with a bubble component.
- Bubble should be docked in the bottom-right corner and always visible when active.
- Clicking/expanding the bubble should reveal the embedded terminal view.
- Terminal should stream relevant logs in real-time (sync, build, deploy, etc.).
Benefits:
- Consolidates status indicators and logs into one accessible component.
- Improves visibility into ongoing processes without leaving the page.
- Provides a more modern, interactive UI experience in dFlow.
#398 opened by charanm927
Currently, the default monitoring in the project does not include any alerting mechanism. To improve system reliability and user awareness, add alert functionalities to the default monitoring setup.
Proposed changes
- Integrate display of simple system alerts originating from Beszel in the frontend monitoring tab.
- These alerts will only be shown in the monitoring tab and will not trigger notifications.
Benefits
- Improved observability for users by surfacing important system alerts from Beszel in the UI.
- No notification noise; alerts are strictly visual within the monitoring tab.
Additional context
This feature will help users proactively manage their workflows and maintain system health by providing visibility to Beszel alerts.
#390 opened by manikanta9176
Summary
Create a centralized JSON file that defines and fixes the versions of all third-party packages used across the project, including beszel (hub and agent), netdata, dokku, buildkit, railpack, dokku plugins, and others.
Motivation
Managing package versions manually can lead to inconsistencies and unexpected behavior across servers. A single JSON file will allow us to:
- Fix and track versions for all dependencies in one place.
- Simplify the update process for all servers by updating the JSON file when releasing updates.
- Enable automated update mechanisms (either triggered by release or user action) to use the JSON file as the authoritative source for package versions.
Acceptance Criteria
- Create a JSON file listing all relevant packages and their versions.
- Implement logic to update servers based on the versions specified in the JSON file.
- Document the update process for maintainers and users.
- Ensure future releases only require updating the JSON file to propagate new versions.
Additional Context
This feature will provide greater reliability and efficiency for both maintainers and users when updating server environments.
Packages to include (not exhaustive):
- beszel (hub and agent)
- netdata
- dokku
- buildkit
- railpack
- dokku plugin versions
- Any other relevant packages
Please discuss additional package candidates and implementation details as needed.
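A minimal shape for such a file might look like the following (the structure is a suggestion and the `x.y.z` versions are placeholders, not recommendations):

```json
{
  "beszel": { "hub": "x.y.z", "agent": "x.y.z" },
  "netdata": "x.y.z",
  "dokku": "x.y.z",
  "buildkit": "x.y.z",
  "railpack": "x.y.z",
  "dokkuPlugins": {
    "postgres": "x.y.z",
    "letsencrypt": "x.y.z"
  }
}
```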
#387 opened by manikanta9176
#385 opened by jagadeesh507
- Currently, when reset-onboarding is triggered, the button simply gets disabled.
- The user has no information about what is happening in the background.
- Show an alert on the server-details page indicating that reset-onboarding has been triggered for the server.
#384 opened by pavanbhaskardev
Many users don’t have Discord accounts. Please consider using GitHub for eligibility instead, so more people can participate.
#376 opened by manikanta9176
Display a toast notification to the user when a new version of the app is available. This will encourage users to refresh and use the latest build, preventing the use of stale versions.
Benefits:
- Ensures users always have access to the latest features and bug fixes.
- Improves user experience by avoiding issues caused by outdated builds.
Acceptance Criteria:
- Detect when a new version of the app is available.
- Show a clear and actionable toast notification prompting users to refresh or reload.
- The toast should only appear when the user is on a stale build.
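The staleness check behind the toast can be sketched as follows. It assumes the client knows the build id it was served with and can fetch the latest id from a small endpoint (e.g. `/api/version`, hypothetical); the toast appears only when the two differ:

```typescript
// A build is stale when the running build id differs from the latest deployed one.
export function isStaleBuild(currentBuildId: string, latestBuildId: string): boolean {
  return currentBuildId !== latestBuildId
}

// Illustrative polling wrapper: fetch the latest build id and surface a toast
// only when the running build is stale.
export async function checkForUpdate(
  currentBuildId: string,
  fetchLatest: () => Promise<string>,
  showToast: (message: string) => void,
): Promise<void> {
  const latest = await fetchLatest()
  if (isStaleBuild(currentBuildId, latest)) {
    showToast('A new version is available. Refresh to get the latest build.')
  }
}
```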
#374 opened by pavanbhaskardev
Enhance the cloud-init script to support skipping any setup step using configurable flags. This improves flexibility for deployments, allowing users to customize which steps are executed.
Tasks:
- Refactor cloud-init script to accept flags for each major step
- Document available flags and their effects
- Ensure correct behavior when multiple steps are skipped or combined
- Add tests for flag-based step execution
This will provide better control over the initialization process for different environments or requirements.
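The flag logic is language-agnostic; a sketch of the step-selection idea is shown here in TypeScript for consistency with the other examples (in the actual cloud-init script it would be shell conditionals, and the `SKIP_<STEP>` flag names are hypothetical):

```typescript
type Step = { name: string; run: () => void }

// Each major setup step is skipped when its SKIP_<STEP> flag is set.
export function stepsToRun(steps: Step[], flags: Record<string, boolean>): Step[] {
  return steps.filter((step) => !flags[`SKIP_${step.name.toUpperCase()}`])
}
```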
#370 opened by pavanbhaskardev