# Rackspace Deployment Mirror Toolkit

This toolkit helps you mirror an existing Rackspace OSPC footprint into a repeatable deployment plan for Flex. It starts by exporting a full Rackspace account inventory, then builds a server-by-server mapping to target-platform flavors and images, including block storage behavior (boot-from-volume vs. local boot, data-volume attachment paths, and sizing). Outputs are kept in practical CSV form so teams can review, filter, and adjust decisions in familiar tools such as Excel, then visualize everything in a lightweight dashboard for faster validation and planning.

From there, the toolkit moves from planning to execution with guardrails. You can explicitly exclude workloads, generate tenant-safe OpenStack deployment artifacts, and run pre-deploy validation to catch bad mappings before anything is created. After deployment, the post-deploy verifier compares the live environment against the plan and produces a report of passes, warnings, and failures. The result is a transparent, auditable process for building new blank-slate resources on Flex that mirror the source design.
## Features

This toolkit helps you:

- Export current Rackspace OSPC account inventory to CSV
- Map source (OSPC) servers to target-platform (Flex) flavors and OS recommendations
- Map source (OSPC) block storage behavior to target-platform (Flex) volume actions
- Map source Cloud Load Balancers and members to target Octavia LB constructs
- Review results in a browser dashboard
- Generate a tenant-safe OpenStack deployment script and plan to mirror the source footprint on new resources
- Generate a paired rollback shell script for fast reverse-order cleanup

## Included Scripts

- account_overview.py
- flavor_mapper.py
- generate_project_deploy_script.py
- validate_migration_inputs.py
- verify_post_deploy.py
- dashboard/ (HTML/JS/CSS viewer)

## Prerequisites

- Python 3.9+
- Dependencies from requirements/requirements.txt
- Access credentials for the source Rackspace account
- OpenStack CLI configured for the target Flex tenant project (for the deployment step)

Install dependencies:

    pip3 install -r requirements/requirements.txt

Run the workflow dashboard:

    python3 workflow_dashboard/app.py

Expected startup output includes the URL to open:

    * Serving Flask app 'app'
    * Debug mode: off
    WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
    * Running on http://127.0.0.1:5001
    Press CTRL+C to quit

## Typical Workflow

### 1) Export source inventory

Run:

    python3 account_overview.py

You will be prompted for:

- Rackspace username
- API key
- Account ID

Output file:

- *_overview.csv — for example, 123456_overview.csv

### 2) Build flavor + block mapping

Run:

    python3 flavor_mapper.py --inventory 123456_overview.csv --target-region IAD --target-flavor-catalog iad_target_flavors.csv

Default outputs:

- 123456_flavormap.csv
- 123456_blockmap.csv
- 123456_lbmap.csv (Cloud Load Balancer to Octavia member mapping)

What this includes:

- Source server flavor + target flavor match
- Source OS metadata + recommended target image name
- Editable cloud_init_user_data column (optional) for per-server cloud-init/user-data YAML passed during deploy generation
- For boot-from-volume servers, a fallback to Cinder volume_image_metadata when server image metadata is blank
- Estimated target minimum costs (hour/day/month where a rate exists)
- Boot strategy (local boot vs. boot-from-volume)
- Data volume mapping details
- Target selection based on the deployment-region flavor catalog you provide
- Cloud Load Balancer member mapping, built by matching CLB node private IPs (for example, 10.x.x.x) to source server private_ips
- Per-member mapping records used by deploy generation to place mapped target servers behind Octavia LBs

Target flavor catalog CSV:

- Provide a region-specific CSV (or a multi-region CSV with a region column) containing at least flavor ID, name, RAM, vCPUs, and disk.
- Supported column names include:
  - ID: flavor_id, target_flavor_id, or id
  - Name: name, flavor_name, or target_flavor_name
  - RAM: ram_mb, ram, or memory_mb
  - vCPU: vcpus, vcpu, or cpu
  - Disk: disk_gb, disk, or root_disk_gb
  - Optional hourly rate: target_hourly_rate_usd, hourly_rate_usd, hourly_rate, or price_per_hour_usd

Optional DB-to-server conversion (opt-in):

    python3 flavor_mapper.py --inventory 123456_overview.csv --include-database-instances-as-servers

When enabled:

- database_instance rows are included in *_flavormap.csv as server deploy candidates
- Mapping uses target flavors with local disk and prefers larger disk when multiple candidates have the same RAM
- Converted rows are marked with source_resource_type=database_instance and a conversion_note

Excluding VMs from deploy:

In 123456_flavormap.csv, set include_in_deploy to no for any VM you want to skip. Accepted include values (the VM is deployed):

- yes, true, 1, y, on

Anything else is treated as excluded. If a VM is excluded, its dependent volume actions are also excluded.

### 3) Review in dashboard (optional)

Open dashboard/index.html in your browser.
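The include_in_deploy semantics above can be sketched in a few lines of Python. This is illustrative only — the real parsing lives in flavor_mapper.py and generate_project_deploy_script.py, and the case-insensitive matching shown here is an assumption, not documented behavior:

```python
# Sketch of the documented include_in_deploy semantics: only these values
# deploy a VM; anything else (including blanks) excludes it, along with its
# dependent volume actions.
TRUTHY = {"yes", "true", "1", "y", "on"}

def include_in_deploy(value: str) -> bool:
    """Return True when a flavormap row should be deployed.

    Note: lowercasing the value (case-insensitive match) is an assumption
    made for this sketch.
    """
    return value.strip().lower() in TRUTHY
```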
Upload files in the page:

- Account Overview CSV
- Flavor Mapping CSV
- Optional Block Storage Mapping CSV

Dashboard features:

- Service/resource tables
- Flavor and volume inventory summaries
- Source/target RAM and vCPU totals
- Minimum target cost summaries
- Simple source inventory charting
- Optional LB Mapper CSV analysis:
  - LB rows and LB count summaries
  - Protocol breakdown / top protocol
  - LB node/member row counts and matched/include indicators
  - Searchable LB mapping table

### Optional: Run the workflow from a web UI (no CLI flags)

This project includes a workflow runner UI at workflow_dashboard/app.py. Dependencies are installed via:

    pip3 install -r requirements/requirements.txt

Start the dashboard:

    python3 workflow_dashboard/app.py

Then open http://127.0.0.1:5001. The UI includes three tabs: OSPC2Flex, Visual Topology Builder, and Analyze CSV.

The web UI lets users upload/select CSVs and run:

- account_overview.py
- flavor_mapper.py
- validate_migration_inputs.py
- generate_project_deploy_script.py

OSPC2Flex workflow highlights:

- Step 2: **Flavor + Block + LB Mapping**
- Step 3: Pre-deploy validation accepts Flavor Map + Block Map + optional LB Map CSV
- Step 4: Deploy generation accepts an optional LB Map CSV and passes it to deploy artifact generation

The Visual Topology Builder includes:

- Drag-and-drop style topology canvas for OpenStack resources: Network, Subnet, Router, Security Group, Instance, Volume, Load Balancer (Octavia)
- Connection modeling between resources
- Topology validation checks before deploy (required fields, invalid edge pairs, and structural issues)
- Ordered deployment plan preview with a generated command sequence
- Import of live resources from an authenticated OpenStack project to auto-build a topology
- Import of deployment shell scripts (by file path or pasted script text) with a best-effort visual topology preview
- Instance-level floating IP toggle (no extra node clutter), with an in-node FIP indicator badge
- Optional per-instance cloud-init/user-data YAML in Properties (user_data), passed as --user-data during server create
- Default instance flavor of gp.5.4.4 in the visual builder (update as needed per project/region)
- Linux instances require key_name; Windows instances use auth_mode=windows_password with a per-node admin_password
- Direct deploy validation that referenced keypair names exist in the authenticated target project
- Octavia load balancer support:
  - Connect an LB node to a Subnet (VIP subnet) and backend Instance nodes (pool members)
  - Configure provider (ovn or amphora), protocol, listener port, member port, and pool algorithm
  - Platform note: ovn currently supports TCP listeners only; use amphora for HTTP/HTTPS
  - Optional LB floating IP toggle (needs_floating_ip) with a selectable floating network
  - The deploy script creates the LB + listener + pool and adds backend members
- Save/load of topology JSON files under uploads/topologies/
- Multiple sample topology files under uploads/topologies/; uploads/topologies/TOPOLOGY_NOTES.md documents each sample's purpose
- Auto Layout control for quick structured node placement
- Script-import parser that resolves common generated-script patterns for:
  - Volume attach relationships (instance <-> volume, attach)
  - Boot-from-volume relationships (instance <-> volume, boot)
  - Security-group associations from server create --security-group ...
  - Linux keypair metadata from script-level KEY_NAME=... as a fallback when inline --key-name is conditional
  - LB member intent (load balancer <-> backend instances)
- Generation of an OpenStack CLI shell script from the designed topology
- Optional direct deploy using OpenRC credentials (path or pasted content)
- Optional API key/password override input for environments where OpenRC alone is not sufficient for non-interactive auth

Deploy script runtime controls for topology deploy:

- OS_CMD_TIMEOUT_SEC (default 180): timeout for each OpenStack CLI command, to avoid indefinite hangs
- RESOURCE_COLLISION_POLICY (default reuse): set to fail to stop immediately when a same-name resource already exists

In the Flavor + Block + LB Mapping step, you can enable an option to include database_instance rows as server targets.

The workflow UI requires selecting a Deployment Region before running flavor mapping. Regional target flavor catalogs are loaded automatically from uploads/flavors/ using region-based filenames (for example, DFWFlavors.csv, SJCFlavors.csv, IADFlavors.csv). The currently supported deployment regions in the workflow UI are DFW, SJC, and IAD.
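The per-command timeout behavior of OS_CMD_TIMEOUT_SEC can be sketched with Python's subprocess module. This is an illustrative stand-in, not the toolkit's actual implementation; the rc 124 convention for timeouts is borrowed from coreutils `timeout` and assumed here for the sketch:

```python
import os
import subprocess

def run_cli(args, timeout_sec=None):
    """Run a CLI command with a hard timeout, mirroring the documented
    OS_CMD_TIMEOUT_SEC behavior (default 180 seconds).

    Returns (returncode, stdout); a timeout is reported as rc 124
    (assumed convention for this sketch).
    """
    if timeout_sec is None:
        timeout_sec = int(os.environ.get("OS_CMD_TIMEOUT_SEC", "180"))
    try:
        proc = subprocess.run(args, capture_output=True, text=True,
                              timeout=timeout_sec)
        return proc.returncode, proc.stdout
    except subprocess.TimeoutExpired:
        # Command exceeded the timeout; fail the step instead of hanging.
        return 124, ""
```

In a generated deploy script, each `openstack ...` call would be bounded this way, so a hung API call fails one step rather than blocking the whole deploy.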
### 4) Generate tenant deployment artifacts

Run:

    python3 generate_project_deploy_script.py --flavor-mapping 123456_flavormap.csv --block-storage-mapping 123456_blockmap.csv

Default outputs:

- 123456_tenant_deploy.sh
- 123456_tenant_deploy_rollback.sh
- 123456_tenant_deploy_plan.csv
- 123456_tenant_deploy_unresolved.csv
- 123456_tenant_deploy_results.csv (written when the deploy script runs)
- 123456_tenant_deploy_windows_credentials.csv (written when Windows instances are included)

The generated script:

- Creates/uses tenant network resources
- Creates servers from mapped flavors/images
- Creates boot volumes from images when boot-from-volume is required
- Uses a configurable volume type (default: Performance) for boot/data volumes
- Waits for server/volume readiness before attach operations
- If *_lbmap.csv is present (or passed with --load-balancer-mapping), creates Octavia load balancers/listeners/pools and adds mapped backend members
- Ensures the selected security group exists (auto-creates it if missing)
- Uses a keypair for Linux instances when provided; a specified keypair must already exist in the target project
- Generates random alphanumeric Windows passwords (12-16 characters, default 14) for Windows instances and writes a credentials CSV
- Runs each planned action as an isolated step and records PASS/FAIL in a results CSV
- Continues on step failures by default and exits non-zero at the end if any step failed

Rollback script:

- *_tenant_deploy_rollback.sh is generated by default (disable with --no-rollback)
- Deletes resources in dependency-safe reverse order:
  - Load balancers
  - Servers
  - Data volume detach/delete
  - Boot volumes created by the boot-from-volume flow
  - Router/subnet/network teardown
- Safety behavior:
  - Interactive mode requires typing DELETE
  - Non-interactive mode requires ROLLBACK_AUTO_APPROVE=1

Optional fail-fast behavior:

    python3 generate_project_deploy_script.py --flavor-mapping 123456_flavormap.csv --block-storage-mapping 123456_blockmap.csv --fail-fast

### 4a) Validate mapping inputs before deploy (recommended)

Run:

    python3 validate_migration_inputs.py --flavor-mapping 123456_flavormap.csv --block-storage-mapping 123456_blockmap.csv

Default output:

- 123456_validation_report.csv

Behavior:

- Exit code 0 when no blocking errors are found
- Exit code 2 when one or more ERROR findings are present
- duplicate_server_name is reported as a WARN because duplicate included names can cause ambiguity and should be fixed before production deploys

Duplicate-name handling in deploy generation:

- If deploy generation is run with duplicate included server_name values, names are auto-suffixed to keep target names unique (for example: linux-server-1, linux-server-1-2, linux-server-1-3)
- The workflow dashboard does not enforce validate-before-generate, so generation can still be run after a failed validation step

### 5) Execute the deployment script

    bash 123456_tenant_deploy.sh

### 6) Verify the deployed footprint against the plan

Run:

    python3 verify_post_deploy.py --plan 123456_tenant_deploy_plan.csv

Default output:

- 123456_post_deploy_report.csv

Behavior:

- Uses the OpenStack CLI to check server/volume existence and key attributes
- Exit code 0 when there are no FAIL checks
- Exit code 2 when one or more FAIL checks are found

## Key Files Explained

- *_overview.csv: Raw source inventory export
- *_flavormap.csv: Server-level deployment mirror plan (flavors, images, costs, include flag); includes an optional cloud_init_user_data column for per-instance cloud-init/user-data content
- *_blockmap.csv: Volume-level deployment mirror plan (target actions, attach paths)
- *_lbmap.csv: Load balancer/member mapping plan for the Octavia build
- *_tenant_deploy_plan.csv: Planned create/attach actions
- *_tenant_deploy_unresolved.csv: Items skipped or requiring manual action
- *_tenant_deploy_windows_credentials.csv: Generated Windows admin credentials for deployed Windows instances
- *_tenant_deploy_rollback.sh: Reverse-order cleanup helper for generated resources
- *_validation_report.csv: Pre-deploy validation findings (errors/warnings/info)
- *_post_deploy_report.csv: Post-deploy verification results (pass/warn/fail)

## Notes

- This toolset is tenant-focused and does not require global admin access.
- This toolset mirrors source workloads by creating new target resources; it does not perform an in-place platform migration.
- Deployment assumes an external network (typically PUBLICNET). The security group can be overridden and is auto-created if missing.
- Always review the plan/unresolved CSVs before running deploy scripts in production.
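The duplicate-name auto-suffixing described in step 4a (linux-server-1, linux-server-1-2, linux-server-1-3) can be sketched as follows. This is an illustrative helper; the toolkit's real logic in generate_project_deploy_script.py may differ (for example, in how it guards against a generated suffix colliding with an existing name):

```python
def dedupe_server_names(names):
    """Auto-suffix duplicate server names so target names stay unique,
    following the documented pattern: the first occurrence keeps its
    original name, repeats get -2, -3, ... appended (sketch only)."""
    seen = {}
    result = []
    for name in names:
        count = seen.get(name, 0) + 1
        seen[name] = count
        result.append(name if count == 1 else f"{name}-{count}")
    return result
```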