/r/openstack


Subreddit dedicated to news and discussions about OpenStack, an open source cloud platform.

OpenStack is a collection of software which enables you to create and manage a cloud computing service similar to Amazon AWS or Rackspace Cloud. This subreddit exists as a place for posting information, asking questions, and discussing news related to this technology.

More information on OpenStack can be obtained via the following external resources:

  • Official Docs:
  • /r/openstack

    10,748 Subscribers

    2

    AIO + 3 NICs for sub-nets - segmentation problem

    Edited approximately 2 hours after posting.

    I am about to deploy OpenStack 2024.1 with Kolla Ansible, in all-in-one (AIO) form.

    The environment that will host the cloud is currently (the day before the cloud and its underlying physical layer are added) a fairly small LAN in a private household. This LAN has a gateway to the Internet.

    The cloud is planned to live in a dedicated subnet whose gateway provides NAT plus a firewall. The cloud will not be allowed to communicate with devices on the main LAN, with one exception; more details on that exception later in this post.

    Physical layer: the node will have 3 NICs. The first connects the cloud subnet to the Internet gateway, because OpenStack deployment and maintenance need to download software packages and repos from the Internet; the same applies to the tenant project to be set up on the deployed cloud, and to maintenance of the physical layer's software stack. The cloud has to be private, i.e. not visible or accessible from the Internet. The physical connection from the cloud to the house Internet gateway is an Ethernet segment in a dedicated VLAN; no other devices are present in this VLAN, only the cloud and the gateway. In effect, the NAT+firewall combination appears twice (chained) on the path from cloud to Internet: (i) at the edge of the cloud subnet and (ii) at the house-to-Internet gateway. Routing to other VLANs will not be possible.

    The second NIC is for OpenStack inter-service traffic (the internal network), but is connected to no Ethernet segment (a stub); its purpose is to give the OpenStack internal network sufficient data transfer capacity. I added this NIC because I was inspired by materials presenting OpenStack with standard network segmentation, with an internal subnet as one of the common subnets of OpenStack-based clouds.

    The third NIC of the cloud's physical node serves a further dedicated subnet towards a workstation, on which both the OpenStack admin and the tenant admin/user work. The workstation's home is actually the house's main LAN; so that the OpenStack admin and tenant-level roles can access the cloud, the workstation will get a second NIC in the subnet shared with the cloud's third NIC. The workstation is not allowed to route between the cloud and the Internet gateway. It has a desktop-class firewall with a typical configuration: all egress allowed, ingress blocked by default. The cloud's physical layer has no firewall; the OpenStack layer is firewalled per the Kolla Ansible default configuration.

    So much for the segmentation plan. In the next step I encountered two variables in the Kolla configuration: kolla_internal_vip_address and kolla_external_vip_address. The OpenStack guide describes two possible modes for configuring these: (i) combined and (ii) separate. In combined mode, the external, internal, and admin API endpoints are all bound to one single address. In separate mode, the admin and internal endpoints are bound to one address, isolated from the external endpoints, which run under their own address. I am afraid this concept may collide with my network segmentation plan.

    I foresee problems because my segmentation plan puts the admin endpoints and the internal ones on a separate NIC each, while Kolla's separate scheme keeps them on one address.

    Is this a real problem? If so, how can it be resolved?

    The main LAN and the cloud's external VLAN to the Internet gateway are effectively two parallel subnets attached to the Internet gateway.
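    For reference, the two variables in question live in globals.yml together with the interfaces they bind to; a minimal sketch of the separate mode (the addresses and interface names below are assumptions for illustration, not taken from this post):

    ```yaml
    # /etc/kolla/globals.yml -- illustrative values only
    # Internal/admin endpoints bound on the NIC used for inter-service traffic:
    kolla_internal_vip_address: "10.10.10.254"
    network_interface: "eth1"               # second NIC (internal/stub network)
    # External endpoints bound on the NIC facing the workstation subnet:
    kolla_external_vip_address: "192.168.50.254"
    kolla_external_vip_interface: "eth2"    # third NIC (workstation subnet)
    ```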

    1 Comment
    2025/01/30
    12:39 UTC

    1

    Online cinder disk extensions?

    Is it possible to perform disk/volume extensions on volumes attached to a running instance?

    So I can do: $ cinder extend <disk guid> <size in gb>

    And the volume is extended. But the instance/guest is unaware of this: I must power-cycle the instance for the change to be seen by the guest OS. Rescanning the virtio/SCSI bus does not detect any changes.

    All of this seems to have been merged ages ago:

    https://review.opendev.org/c/openstack/nova/+/454322

    https://review.opendev.org/c/openstack/devstack/+/480778

    https://review.opendev.org/c/openstack/tempest/+/480746

    https://review.opendev.org/c/openstack/cinder/+/454287

    https://review.opendev.org/c/openstack/cinder-specs/+/866718

    Are we missing something?

    I'm just a cloud janitor focused on keeping our stuff going vroom vroom, without deep access to our infra.

    running on Ussuri

    Cheers
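    For what it's worth, extending an in-use volume goes through Block Storage API microversion 3.42 or later; a sketch of the calls, with the volume ID left as a placeholder:

    ```shell
    # In-use (online) extend sits behind cinder API microversion 3.42;
    # without requesting it, only 'available' volumes can be extended.
    cinder --os-volume-api-version 3.42 extend <volume-id> 20

    # Equivalent with the unified client (assumes a recent python-openstackclient):
    openstack --os-volume-api-version 3.42 volume set --size 20 <volume-id>
    ```

    When the extend succeeds at this microversion, nova is notified via a volume-extended external event and can inform the guest, which is the part a plain extend-plus-power-cycle skips.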

    8 Comments
    2025/01/30
    09:40 UTC

    2

    Security groups not working if applied during instance creation

    Hi,

    I have OpenStack 2024.2 deployed using Kolla Ansible on Ubuntu 24.04 LTS. I created a simple security group (called MySec) that basically allows all inbound and outbound traffic to the instance. I tried to create an instance from the CLI with the following command:

        openstack server create \
        --flavor m1.tiny \
        --boot-from-volume 1 \
        --image cirros-0.6.2 \
        --nic port-id=PortID \
        --security-group MySec \
        --nic net-id=ExternalNetwork \
        --security-group MySec \
        MyVM

    At first, I noticed that the default security group had also been added. I removed it using openstack server remove security group MyVM default. But even after this, I couldn't ping my instance. I then tried to remove my security group and add it once again; after that, network connectivity started working without any problems.

    Is there something I am missing during the instance creation, or should security groups be applied later once the instance is created?
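    As a point of comparison, the group can also be attached after creation, either at the server or at the port level; a sketch using the names from the post (the port ID is a placeholder, and the output field name can vary by client version):

    ```shell
    # Apply the group to the server (affects all its ports):
    openstack server add security group MyVM MySec
    # Or apply it directly to a specific port:
    openstack port set --security-group MySec <port-id>
    # Inspect which groups a port actually carries:
    openstack port show <port-id> -c security_group_ids
    ```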

    4 Comments
    2025/01/27
    18:28 UTC

    3

    Speed Up Your DevStack Setup: Replace Amphora with a Pre-Built Image

    When I first tried setting up OpenStack with DevStack, the installation process drove me crazy. The default Amphora image install took forever because of the mirroring process. I found a way to fix this by swapping out the default Amphora image with a pre-built one from the OSISM OpenStack Octavia Amphora Image repository. It saved me a ton of time, and I wanted to share this simple fix with you guys!

    This guide explains the importance of using the correct amphora tag, the required settings in local.conf, and how to choose the right topology for your environment.

    Why Replace the Default Amphora Image?

    1. Save Time: Avoid the slow mirrored install process.
    2. Flexibility: Use a pre-built image tailored for Octavia.
    3. Future Use: The amphora tag is essential for Octavia to identify the image automatically.

    Important: The Amphora Image Tag

    When uploading the custom image, the amphora tag is crucial. Octavia relies on this tag to find the correct image. Without it, the controller cannot launch Amphora instances.

    Steps to Replace Amphora

    1. Download the Pre-Built Amphora Image

    Clone the OSISM repository:

    git clone https://github.com/osism/openstack-octavia-amphora-image.git
    cd openstack-octavia-amphora-image

    Download the image:

    wget https://artifacts.osism.tech/octavia-amphora-image/octavia-amphora-haproxy-2024.2.qcow2

    2. Configure local.conf

    Before running stack.sh, update your local.conf file with the necessary settings for Octavia and Amphora.

    Open local.conf:

    nano ~/devstack/local.conf

    Add the following configuration for Octavia:

    [[local|localrc]]

    # Enable Octavia services
    enable_plugin octavia https://opendev.org/openstack/octavia
    enable_plugin octavia-dashboard https://opendev.org/openstack/octavia-dashboard
    LIBS_FROM_GIT+=python-octaviaclient

    # Disable Amphora image build
    DISABLE_AMP_IMAGE_BUILD=True

    # Octavia-specific configuration
    [[post-config|$OCTAVIA_CONF]]
    [controller_worker]
    amp_image_tag = amphora
    amp_flavor_id = 3
    amp_boot_network_list = <network-id>
    loadbalancer_topology = SINGLE  # Use SINGLE for single-node builds

    Replace <network-id> with the ID of your public or provider network. If you're setting up a multi-node environment, consider using ACTIVE_STANDBY instead of SINGLE for loadbalancer_topology.
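    To find the <network-id> value, one option is to look it up with the client (the network name "public" is DevStack's default and an assumption here):

    ```shell
    # List all networks, then print just the ID of the public network:
    openstack network list
    openstack network show public -f value -c id
    ```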

    3. Upload the Custom Amphora Image

    Add the image to OpenStack:

    openstack image create \
    --disk-format qcow2 \
    --container-format bare \
    --file octavia-amphora-haproxy-2024.2.qcow2 \
    --tag amphora \
    "Custom-Amphora-Image"

    Verify the image is tagged and available:

    openstack image list --tag amphora

    The amphora tag is mandatory. Octavia uses this tag to locate the image during load balancer provisioning.

    4. Run DevStack

    1. Run the stack.sh script:

    ./stack.sh

    Verify the Octavia service is active:

    openstack loadbalancer list

    5. Test the Setup

    Create a load balancer:

    openstack loadbalancer create --name my-lb --vip-subnet-id <subnet-id>

    Check that Amphora instances are launched:

    openstack server list --name amphora

    Verify logs for errors:

    sudo journalctl -u devstack@o-*
    sudo cat /var/log/octavia/octavia.log

    Choosing the Right Load Balancer Topology

    Single Node (SINGLE):

    • Best for development or single-node setups.
    • Only one Amphora instance is created per load balancer.

    Active-Standby (ACTIVE_STANDBY):

    • Suitable for multi-node production environments.
    • Two Amphora instances are created for high availability.

    To switch, update the loadbalancer_topology setting in both local.conf and /etc/octavia/octavia.conf.

    Conclusion

    Replacing the default Amphora image with a pre-built one is a straightforward way to speed up DevStack setup and avoid time-consuming mirrored installs. By tagging the image with amphora and configuring Octavia correctly in local.conf, you ensure a smooth integration. Adjust the loadbalancer_topology to match your deployment needs, and you'll have a functional load balancer in no time.

    2 Comments
    2025/01/27
    11:11 UTC

    1

    Diskimage builder question

    Hello everyone, I am trying to configure custom images using diskimage-builder. I had some problems with syntax and can't quite figure out how it should be formatted. However, I was wondering if it is possible to build an image that does RAID1 at boot using this tool, because if it can't, then I need to find something else to build the images. Thanks in advance.

    2 Comments
    2025/01/26
    20:17 UTC

    2

    Security Groups not attaching to instances

    https://preview.redd.it/m5egbswyryee1.png?width=309&format=png&auto=webp&s=7e40d8ce57ccd7dd4fa6d0ed14f1a20fe597291d

    In my OpenStack multinode setup I can provision instances, but when I select security groups they are not attached to the instances. I can also see the available security groups in the Security Groups section. Can someone help me with this, please?

    5 Comments
    2025/01/24
    15:58 UTC

    5

    Glance with Cinder Backend not using internal API-Endpoints for inter-service communication

    Hi People,

    I'm again pulling my hair out over Openstack.

    Openstack is deployed with Kolla-ansible (19.0.1), Openstack version 2024.2

    I have a Cinder backend with the Huawei Fibre Channel driver. The driver generally works: I can provision, attach, and write to volumes via FC.

    Glance also works with local file storage. Now the task is to also store images in Cinder. Should be an easy task, or so I thought...

    The current problem where I'm stuck: I'm telling glance-api specifically to request the internal API endpoint from the catalogue, and it keeps accessing the external one, which it can't reach because it's blocked. I'd rather not unblock it in the firewall and instead properly fix what's wrong.

    The Glance container is stuck in a restart loop and never gets healthy: 2025-01-22 20:36:01.248 7 DEBUG glance_store._drivers.cinder.store [-] Cinderclient connection created for user glance using URL: http://100.121.3.250:5000/v3. get_cinderclient /var/lib/kolla/venv/lib/python3.12/site-packages/glance_store/_drivers/cinder/store.py:648

    and

    ERROR: Request to https://<pub_api_endpoint>:8776/v3/695b9c52141149a4b57a471ef882cfbe/types?name=__DEFAULT__&is_public=None timed out

    Here it should use the internal Endpoint.

    So it goes to the internal identity service API endpoint to retrieve the catalogue, but then tries to talk to cinder-api via the external endpoint.

    According to the docs, the option cinder_catalog_info should be exactly what I need. But after setting it and rolling out, it does exactly nothing; the public endpoint is always used.

    Confs

    # cat /etc/kolla/config/glance/glance-api.conf
    [DEFAULT]
    stores = file, cinder
    # next line is for debugging only and not supposed to be configured in production
    #show_multiple_locations = True
    show_image_direct_url = False
    # the next lines only work in conjunction with image_upload_use_internal_tenant = True in cinder.conf
    enabled_backends = huawei_backend:cinder
    
    debug = True
    
    [glance_store]
    default_backend = huawei_backend
    
    [keystone_authtoken]
    service_token_roles_required = True
    
    [huawei_backend]
    store_description = "FC Storage Array"
    
    # !!! This should be the option which solves our issues
    # Some docs also say this should go under [DEFAULT], which doesn't make a difference
    cinder_catalog_info = volumev3::internalURL
    
    # Alternatively tried the line below, no dice
    # cinder_endpoint_template = http://100.121.3.250:8776/v3/%(tenant)s
    cinder_store_auth_address = http://100.121.3.250:5000/v3
    cinder_store_user_name = glance
    cinder_store_password = <glance_keystone_pw>
    cinder_store_project_name = service
    cinder_volume_type = __DEFAULT__

    Any help would be appreciated. Thanks!
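    One way to double-check what the catalogue actually holds for the volume service; these are standard client calls (the service type volumev3 matches the cinder_catalog_info value above):

    ```shell
    # Show all endpoints registered for the volumev3 service type:
    openstack endpoint list --service volumev3
    # Print only the internal one:
    openstack endpoint list --service volumev3 --interface internal -f value -c URL
    ```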

    1 Comment
    2025/01/22
    20:47 UTC

    1

    How to update kolla images correctly

    I managed to update the Kolla images successfully by updating the kolla-ansible repo first. But what if this step gets me the latest images rather than the LTS ones? I need someone to explain the correct update procedure to me.
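    A sketch of the flow this usually takes, assuming the target release is pinned in globals.yml rather than left at the default (the release value below is an example, not from the post):

    ```shell
    # Pin the image tag in /etc/kolla/globals.yml instead of relying on the default:
    #   openstack_release: "2024.1"
    # Then pull the pinned images and apply them:
    kolla-ansible -i inventory pull
    kolla-ansible -i inventory deploy   # or 'upgrade' when moving to a new release
    ```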

    1 Comment
    2025/01/21
    20:08 UTC

    3

    Does the compute node need an external network interface?

    In kolla-ansible:

    When compute nodes and control nodes use different interfaces,

    you need to comment out "api_interface" and other interfaces from the globals.yml and specify like below:

    #compute01 neutron_external_interface=eth0 api_interface=em1 tunnel_interface=em1

    This is my configuration:

    controller neutron_external_interface=eth0 api_interface=em1 tunnel_interface=em1

    compute01 api_interface=em1 tunnel_interface=em1

    compute01 lacks neutron_external_interface, and the external network is handled on the network node. I feel that the compute node does not need an external network interface.
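    Put together, the docs snippet amounts to leaving the shared interface variables unset in globals.yml and overriding them per host in the inventory; a sketch using the host and interface names above:

    ```ini
    # globals.yml -- leave the shared per-host interfaces commented out:
    #api_interface: "eth0"
    #tunnel_interface: "eth0"

    # inventory -- set them per host instead:
    [control]
    controller neutron_external_interface=eth0 api_interface=em1 tunnel_interface=em1

    [compute]
    compute01 api_interface=em1 tunnel_interface=em1
    ```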

    2 Comments
    2025/01/21
    08:32 UTC

    1

    Kolla-Ansible post-deploy command problem

    Hi everyone, I followed the latest guide for installing kolla-ansible all-in-one. I have done the deployment steps (which include kolla-ansible bootstrap-servers, prechecks, and deploy), but then, in the "run OpenStack" section, I got this error even though I am quite sure I followed the steps carefully.

    problem on command: kolla-ansible post-deploy

    Does anyone have a way on how to solve this problem?

    https://preview.redd.it/3psvgg83laee1.png?width=1317&format=png&auto=webp&s=b8456b4f0391dc66f035873cbe8daf55b6da8914

    7 Comments
    2025/01/21
    06:37 UTC

    0

    "swift stat" command not working!!!

    I am using OpenStack Caracal. For Swift, after I followed all the configuration steps in the docs, I arrived at the verification step and got this error (attached picture) when I launched the swift stat command.

    if you can help me please leave a comment.

    0 Comments
    2025/01/20
    16:35 UTC

    1

    Update kolla Ansible images and containers

    I have Kolla Ansible installed, but I need to update the images and containers to the latest images to fix some issues I encounter with older images. Also, is it possible to update specific images only?

    9 Comments
    2025/01/20
    14:59 UTC

    0

    Need help in setting up a network of Physical as well as openstack cirros instances

    So, I am very new to OpenStack and don't really have much idea about setting up the IP addresses.

    I am using DevStack to install an OpenStack environment.

    My current setup includes a TP-Link router (AX3000 WiFi 6), one Windows PC, a Raspberry Pi, and finally an Ubuntu machine on which I want to set up OpenStack.

    The TP-Link router has a DHCP server set up to give out IPs from 100.64.0.2 to 100.64.255.253. The router itself has the IP 100.64.0.1, the Windows PC has 100.64.0.3, the Raspberry Pi has 100.64.0.4, and the Ubuntu machine has 100.64.0.5.

    The idea is to set up the OpenStack environment so that the IPs 100.64.0.10 to 100.64.0.253 are allocated as floating IPs and can be handed out to the instances created in OpenStack. (I want communication between the Windows PC, the Raspberry Pi, and the instances.)

    I have attached a photo to show what I generally want to achieve. The problem is that whenever I run stack.sh, the Ubuntu machine loses Internet access altogether and cannot be contacted from the Windows PC or the Pi.

    I have tinkered with the local.conf file and nothing seems to help, as I could not find any samples. I have torn down the environment and rebuilt the entire thing tens of times by now.

    Seems like I am missing a critical configuration. Is this even possible?
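    A minimal local.conf sketch for this kind of flat setup, using standard DevStack variables (the interface name and passwords are assumptions; the addresses follow the post):

    ```ini
    [[local|localrc]]
    HOST_IP=100.64.0.5              # the Ubuntu machine's LAN address
    PUBLIC_INTERFACE=eth0           # physical NIC on the 100.64.0.0/16 LAN (assumed name)
    FLOATING_RANGE=100.64.0.0/16    # the LAN itself serves as the floating pool network
    Q_FLOATING_ALLOCATION_POOL=start=100.64.0.10,end=100.64.0.253
    PUBLIC_NETWORK_GATEWAY=100.64.0.1
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    ```

    With PUBLIC_INTERFACE attached to br-ex and the allocation pool excluding the addresses the router's DHCP server already owns, floating IPs land on the same L2 segment as the Windows PC and the Pi.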

    https://preview.redd.it/fuz5859q95ee1.png?width=1689&format=png&auto=webp&s=eb1162f59d3b634a5fc1119c73d6b1fcb258a9df

    1 Comment
    2025/01/20
    12:42 UTC

    1

    kolla-ansible - reconfiguring services?

    Hello - second post! :D

    As per my post below, had issues getting microstack to work, tried kolla-ansible. Way more complex, but amazingly I did end up with a working openstack deployment.

    However, I wanted to use cinder for glance storage. The globals.yml file does not have any variables for glance to use cinder.

    I modified the configuration in /etc/kolla/glance-api and ran kolla-ansible reconfigure. That replaced my changes under /etc/kolla with values derived from globals.yml. I redid the configuration, and restarting the container seemed to make openstack image stores list return cinder:

    openstack --os-cloud=kolla-admin image stores list

    +--------+-------------+---------+
    | ID     | Description | Default |
    +--------+-------------+---------+
    | http   | None        | None    |
    | cinder | None        | True    |
    +--------+-------------+---------+

    but on reboot, that fails:

    openstack image stores list

    Failed to contact the endpoint at http://192.168.1.99:9292 for discovery. Fallback to using that endpoint as the base url.

    Failed to contact the endpoint at http://192.168.1.99:9292 for discovery. Fallback to using that endpoint as the base url.

    The image service for kolla-admin:RegionOne exists but does not have any supported versions.

    So have a couple of questions:

    1. Is there a right way to edit the kolla-ansible-generated configs and have the services pick up the changes and, well, continue working?
    2. Is this even possible in kolla-ansible? Or is the aim of kolla-ansible to ONLY configure through globals.yml and whatever that offers?
    3. Is there a distribution that will do what I hope? That is, make deployment for personal and learning use relatively simple, but also allow me to change things as I learn (for example, the desire to use cinder to store images)?
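    On the first question: kolla-ansible's supported way to carry local changes through a reconfigure is the node custom config directory (by default /etc/kolla/config on the deploy host); files placed there are merged into the generated configs instead of being overwritten. A sketch (the glance options shown are illustrative, not a complete cinder-store config):

    ```shell
    # Overrides under the node_custom_config directory survive reconfigure:
    mkdir -p /etc/kolla/config/glance
    cat > /etc/kolla/config/glance/glance-api.conf <<'EOF'
    [glance_store]
    default_backend = cinder
    EOF
    kolla-ansible -i inventory reconfigure
    ```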

    Thinking of trying Atmosphere from VEXX.

    Thanks in advance!

    4 Comments
    2025/01/19
    21:49 UTC

    3

    Need help to fix neutron network issue

    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent neutron_lib.exceptions.ProcessExecutionError: Exit code: 2; Cmd: ['ip', 'netns', 'exec', 'qrouter-dd163263-a329-4854-9b1f-53bee11e4754', 'ip6tables-restore', '-n']; Stdin: # Generated by iptables_manager
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent *filter
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent -D neutron-l3-agent-scope 1
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent COMMIT
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent # Completed by iptables_manager
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent # Generated by iptables_manager
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent *mangle
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :FORWARD - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :INPUT - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :OUTPUT - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :POSTROUTING - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :PREROUTING - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :neutron-l3-agent-FORWARD - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :neutron-l3-agent-INPUT - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :neutron-l3-agent-OUTPUT - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :neutron-l3-agent-POSTROUTING - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :neutron-l3-agent-PREROUTING - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :neutron-l3-agent-scope - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent -I FORWARD 1 -j neutron-l3-agent-FORWARD
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent -I INPUT 1 -j neutron-l3-agent-INPUT
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent -I OUTPUT 1 -j neutron-l3-agent-OUTPUT
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent -I POSTROUTING 1 -j neutron-l3-agent-POSTROUTING
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent -I PREROUTING 1 -j neutron-l3-agent-PREROUTING
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent -I neutron-l3-agent-PREROUTING 1 -j neutron-l3-agent-scope
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent -I neutron-l3-agent-PREROUTING 2 -m connmark ! --mark 0x0/0xffff0000 -j CONNMARK --restore-mark --nfmask 0xffff0000 --ctmask 0xffff0000
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent -I neutron-l3-agent-PREROUTING 3 -d fe80::a9fe:a9fe/128 -i qr-+ -p tcp -m tcp --dport 80 -j MARK --set-xmark 0x1/0xffff
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent COMMIT
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent # Completed by iptables_manager
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent # Generated by iptables_manager
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent *nat
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :PREROUTING - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :neutron-l3-agent-PREROUTING - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent -I PREROUTING 1 -j neutron-l3-agent-PREROUTING
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent COMMIT
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent # Completed by iptables_manager
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent # Generated by iptables_manager
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent *raw
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :OUTPUT - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :PREROUTING - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :neutron-l3-agent-OUTPUT - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent :neutron-l3-agent-PREROUTING - [0:0]
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent -I OUTPUT 1 -j neutron-l3-agent-OUTPUT
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent -I PREROUTING 1 -j neutron-l3-agent-PREROUTING
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent COMMIT
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent # Completed by iptables_manager
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent ; Stdout: ; Stderr: ip6tables-restore v1.8.7 (nf_tables): unknown option "--set-xmark"
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent Error occurred at line: 26
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent Try `ip6tables-restore -h' or 'ip6tables-restore --help' for more information.
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent 
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent 
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent During handling of the above exception, another exception occurred:
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent 
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent Traceback (most recent call last):
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/neutron/agent/l3/agent.py", line 851, in _process_routers_if_compatible
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     self._process_router_if_compatible(router)
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/neutron/agent/l3/agent.py", line 638, in _process_router_if_compatible
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     self._process_added_router(router)
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/neutron/agent/l3/agent.py", line 651, in _process_added_router
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     with excutils.save_and_reraise_exception():
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     self.force_reraise()
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     raise self.value
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/neutron/agent/l3/agent.py", line 649, in _process_added_router
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     ri.process()
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/neutron/common/utils.py", line 184, in call
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     with excutils.save_and_reraise_exception():
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 227, in __exit__
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     self.force_reraise()
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     raise self.value
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/neutron/common/utils.py", line 182, in call
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     return func(*args, **kwargs)
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/neutron/agent/l3/router_info.py", line 1307, in process
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     self.process_address_scope()
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/decorator.py", line 232, in fun
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     return caller(func, *(extras + args), **kw)
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/neutron/common/coordination.py", line 78, in _synchronized
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     return f(*a, **k)
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/neutron/agent/l3/router_info.py", line 1275, in process_address_scope
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     with self.iptables_manager.defer_apply():
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/usr/lib/python3.10/contextlib.py", line 142, in __exit__
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     next(self.gen)
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent   File "/var/lib/kolla/venv/lib/python3.10/site-packages/neutron/agent/linux/iptables_manager.py", line 444, in defer_apply
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent     raise l3_exc.IpTablesApplyException(msg)
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent neutron_lib.exceptions.l3.IpTablesApplyException: Failure applying iptables rules
    2025-01-18 16:42:10.061 21 ERROR neutron.agent.l3.agent 
    2025-01-18 16:42:10.062 21 WARNING neutron.agent.l3.agent [-] Hit retry limit with router update for dd163263-a329-4854-9b1f-53bee11e4754, action 3
    2025-01-18 16:42:10.820 21 ERROR neutron.agent.linux.utils [-] Exit code: 2; Cmd: ['ip', 'netns', 'exec', 'qrouter-dd163263-a329-4854-9b1f-53bee11e4754', 'arping', '-U', '-I', 'qg-c369fb8b-02', '-c', 1, '-w', 2, '172.16.1.47']; Stdin: ; Stdout: ARPING 172.16.1.47 from 172.16.1.47 qg-c369fb8b-02
    Sent 1 probes (1 broadcast(s))
    Received 0 response(s)
    ; Stderr: arping: recvfrom: Network is down
    
    2025-01-18 16:42:10.828 21 INFO neutron.agent.linux.ip_lib [-] Failed sending gratuitous ARP to 172.16.1.47 on qg-c369fb8b-02 in namespace qrouter-dd163263-a329-4854-9b1f-53bee11e4754: Exit code: 2; Cmd: ['ip', 'netns', 'exec', 'qrouter-dd163263-a329-4854-9b1f-53bee11e4754', 'arping', '-U', '-I', 'qg-c369fb8b-02', '-c', 1, '-w', 2, '172.16.1.47']; Stdin: ; Stdout: ARPING 172.16.1.47 from 172.16.1.47 qg-c369fb8b-02
    Sent 1 probes (1 broadcast(s))
    Received 0 response(s)
    ; Stderr: arping: recvfrom: Network is down
    
    2025-01-18 16:42:10.828 21 INFO neutron.agent.linux.ip_lib [-] Interface qg-c369fb8b-02 or address 172.16.1.47 in namespace qrouter-dd163263-a329-4854-9b1f-53bee11e4754 was deleted concurrently

    I have deployed OpenStack multinode with controller, compute, and network nodes. I can log in to Horizon and create instances, but the thing is I can't access the Internet from those instances. So I checked the network namespaces on the network node and noticed that the qrouter namespace is deleted immediately after it is created. I checked the L3 agent log and attached it above. If someone knows what needs to be done, please let me know. Thanks.

    6 Comments
    2025/01/18
    16:53 UTC

    2

    What is it that openstack zun+other core openstack services cannot do that k8s does?

    Like, I deployed Zun with Kolla and it's just been awesome! When combined with Heat, Aodh, and Gnocchi it autoscales; it can do anything. Even complicated applications can be done. It's just awesome!

    So tell me:
    What are the features that k8s offers that openstack zun does not?

    1 Comment
    2025/01/17
    19:05 UTC

    1

    Microstack 2024.1 beta on Ubuntu Server 24.04 installation woes

    Hello - First post!

    Attempting to install Microstack on a physical Ubuntu 24.04 Server box. I decided on Microstack as the distribution to try because the documentation at

    https://canonical.com/microstack/docs/single-node-guided

    makes it seem painless. However, the process fails while bootstrapping the cluster with the error:

    OpenStack APIs IP ranges (172.16.1.201-172.16.1.240): 192.168.10.180-192.168.10.189
    Error: No model openstack-machines found

    Last step seems to be "migrating openstack-machines model to sunbeam-controller".

    Attempting to do that operation manually, I get

    juju migrate --debug --show-log --verbose  openstack-machines sunbeam-controller
    11:28:26 INFO  juju.cmd supercommand.go:56 running juju [3.6.1 cdb5fe45b78a4701a8bc8369c5a50432358afbd3 gc go1.23.3]
    11:28:26 DEBUG juju.cmd supercommand.go:57   args: []string{"/snap/juju/29241/bin/juju", "migrate", "--debug", "--show-log", "--verbose", "openstack-machines", "sunbeam-controller"}
    11:28:26 INFO  juju.juju api.go:86 connecting to API addresses: [10.180.222.252:17070]
    11:28:26 DEBUG juju.api apiclient.go:1035 successfully dialed "wss://10.180.222.252:17070/api"
    11:28:26 INFO  juju.api apiclient.go:570 connection established to "wss://10.180.222.252:17070/api"
    11:28:26 DEBUG juju.api monitor.go:35 RPC connection died
    11:28:26 INFO  juju.juju api.go:86 connecting to API addresses: [192.168.1.180:17070]
    11:28:26 DEBUG juju.api apiclient.go:1035 successfully dialed "wss://192.168.1.180:17070/api"
    11:28:26 INFO  juju.api apiclient.go:570 connection established to "wss://192.168.1.180:17070/api"
    11:28:26 DEBUG juju.api monitor.go:35 RPC connection died
    11:28:26 INFO  juju.juju api.go:86 connecting to API addresses: [10.180.222.252:17070]
    11:28:26 DEBUG juju.api apiclient.go:1035 successfully dialed "wss://10.180.222.252:17070/api"
    11:28:26 INFO  juju.api apiclient.go:570 connection established to "wss://10.180.222.252:17070/api"
    11:28:26 INFO  cmd migrate.go:152 Migration started with ID "3237db61-5410-4a6e-8324-4e97ec608dd3:2"
    11:28:26 DEBUG juju.api monitor.go:35 RPC connection died
    11:28:26 INFO  cmd supercommand.go:556 command finished

    Because of the messages:

    11:28:26 DEBUG juju.api monitor.go:35 RPC connection died

    I suspect I am NOT setting up networking properly. The link https://canonical.com/microstack/docs/single-node-guided indicates that two networks should be used but gives little information on how they should be set up. My netplan is:

    network:
      ethernets:
        enxcc483a7fab23:
          dhcp4: true
        enxc8a362736325:
          dhcp4: no
      vlans:
        vlan.20:
          id: 20
          link: enxc8a362736325
          dhcp4: true
          dhcp4-overrides:
            use-routes: false
          routes:
            - to: default
              via: 192.168.10.1
              table: 200
            - to: 192.168.10.0/24
              via: 192.168.10.1
              table: 200
          routing-policy:
           - from: 192.168.10.0/24
             table: 200
      version: 2
      wifis: {}

    I have tried using both networks in all the roles; my pastes above reflect my last try with the 192.168.10.0 network as controller, but I also tried 192.168.1.0. The VLAN for the 10 network is defined on the router, and it seems to be bridged properly with the VLAN for the 192.168.1 network, which is untagged.
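    One thing worth sanity-checking before re-bootstrapping is that the API IP range answered at the sunbeam prompt actually lies inside the subnet of the interface you picked. A quick offline check with Python's stdlib `ipaddress` (the ranges are the ones from the paste above; the helper function is my own):

    ```python
    import ipaddress

    def range_in_subnet(start, end, cidr):
        """True if both endpoints of the start..end range fall inside cidr."""
        net = ipaddress.ip_network(cidr)
        return (ipaddress.ip_address(start) in net
                and ipaddress.ip_address(end) in net)

    # The range answered at the bootstrap prompt vs. the VLAN 20 subnet:
    print(range_in_subnet("192.168.10.180", "192.168.10.189", "192.168.10.0/24"))  # True
    # The guide's example default, which does NOT fit this subnet:
    print(range_in_subnet("172.16.1.201", "172.16.1.240", "192.168.10.0/24"))      # False
    ```

    The range should also stay clear of the subnet's DHCP pool, since those addresses are handed out verbatim to the OpenStack API endpoints.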

    I posted over at the Canonical forum, but there seems to be so little traffic that it's unlikely I'll get a reply.

    Thanks so much in advance

    1 Comment
    2025/01/17
    12:23 UTC

    2

    Trying to back up controllers

    Using Kolla Ansible 2023.1 with a pair of virtual controllers. I'd like to simply shut down one of the two controllers, back it up, turn it back on, wait a bit, then turn the other controller off and repeat the process. But the process takes a while (I made the VMs large in size as my Glance images are all stored locally and some of those can be large), and it seems to me like every time I power a controller back on, something goes awry.

    Sometimes I have to use the mariadb_recovery command to get everything back together, or sometimes it's something different, like the most recent time, where I discovered that the nova-api container had crashed while the second controller was being backed up. One way or another, it seems like bringing down a controller for a bit to back it up always causes some sort of problem.

    How does everyone else handle this? Thanks!

    4 Comments
    2025/01/14
    20:29 UTC

    5

    Hello everyone, can OpenStack routing only have one internal network and one external network? I want an internal network to correspond to multiple network segments of external networks to implement EIP. How can this be achieved?

    4 Comments
    2025/01/13
    14:47 UTC

    0

    Snapshot compression level

    I am using LVM in Cinder and iSCSI for volumes. How can I store snapshots in a compressed format when they are taken? I noticed that a new volume is created for the snapshot, but I want it to be stored in a compressed format.
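    For context on why this is awkward: the Cinder LVM driver keeps snapshots as raw copy-on-write LVM volumes, so there is no built-in compressed on-disk format for them; compression generally has to happen when the snapshot is exported (e.g. piping the raw device through a compressor, or using cinder-backup, whose drivers support a `backup_compression_algorithm` option if I recall correctly). A rough offline illustration of the space win when a mostly-empty raw image is compressed on export, using only Python's stdlib (file names are made up):

    ```python
    import gzip
    import os
    import tempfile

    # Simulate a mostly-empty raw snapshot image: a little data plus 4 MiB of zeros.
    raw = tempfile.NamedTemporaryFile(delete=False, suffix=".raw")
    raw.write(b"filesystem-data" + b"\x00" * (4 * 1024 * 1024))
    raw.close()

    # Compress on export, as a `dd if=/dev/vg/snap | gzip > snap.gz` pipeline would.
    gz_path = raw.name + ".gz"
    with open(raw.name, "rb") as src, gzip.open(gz_path, "wb") as dst:
        dst.write(src.read())

    raw_size = os.path.getsize(raw.name)
    gz_size = os.path.getsize(gz_path)
    print(raw_size, gz_size)  # the zero-filled image compresses dramatically
    os.remove(raw.name)
    os.remove(gz_path)
    ```

    If compressed-at-rest snapshots are a hard requirement, a backend with native compression (e.g. Ceph with BlueStore compression) is probably a better fit than LVM.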

    1 Comment
    2025/01/11
    19:00 UTC

    2

    Help Needed: IPsec VPN Setup Issue with Traffic Routing in OpenStack

    Hi everyone,

    I’m working on setting up an IPsec VPN in my OpenStack environment, but I’m running into an issue with routing traffic from other VMs in the subnet through the VPN server. Here's the summary of my setup and the problem I’m facing:

    Setup Overview:

    Issue:

    • The IPsec VM (172.16.4.80) successfully establishes the tunnel, and I can ping the destination from this VM using the tunnel.
    • However, traffic from the Application VM (172.16.4.26) fails when routed through the IPsec VM (172.16.4.80) to the destination.

    What I've Tried:

    • Verified IP forwarding is enabled on the IPsec VM.
    • Ensured the tunnel is established and functional (from the IPsec VM).
    • Checked security groups and firewall rules to ensure traffic is allowed.
    • Investigated whether the centralized SNAT (172.16.4.55) is interfering with traffic flow.

    Questions:

    1. Is the network:router_centralized_snat causing the traffic to bypass the IPsec VM?
    2. Do I need to disable port security or reconfigure the router interfaces for proper routing?
    3. How can I ensure traffic from 172.16.4.26 routes correctly through the IPsec VM (172.16.4.80) and uses the tunnel?
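    On question 2: one common culprit in this "route via another VM" pattern is Neutron port security, which drops packets whose source address is not the port's own. The usual fix is an allowed-address-pairs entry (or disabling port security) on the IPsec VM's port. A minimal sketch of the update body you would pass to openstacksdk's `update_port`, or equivalently to `openstack port set --allowed-address` (the helper name is my own):

    ```python
    def allowed_pairs_body(cidrs):
        """Build a Neutron port-update body permitting extra source CIDRs."""
        return {"allowed_address_pairs": [{"ip_address": c} for c in cidrs]}

    # Let traffic sourced from the application subnet leave via the IPsec VM's port:
    body = allowed_pairs_body(["172.16.4.0/24"])
    print(body)
    ```

    With openstacksdk this would be roughly `conn.network.update_port(port_id, **body)`; note this is a sketch of one likely cause, not a confirmed diagnosis of the SNAT interaction in question 1.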

    Any advice or suggestions would be greatly appreciated!

    1 Comment
    2025/01/11
    08:03 UTC

    4

    Confused about deploying my own Openstack deployment with TripleO

    So I just took on a new job which requires me to administer OpenStack. Since it is such a niche skill, my previous RHEL experience was deemed enough, with the aim that I learn the OpenStack part on the job.

    I would rather deploy my own cloud from the ground up to get a true understanding of all the components involved and their config. The Openstack cloud my company has going is based on the Tripleo Ansible install.

    The documentation for OpenStack as a whole seems so disparate that it's not as straightforward as I hoped. Is there a guide I can follow to set up my own install for lab purposes? What method for getting to grips with RHOSP would you recommend in my case?

    5 Comments
    2025/01/09
    17:12 UTC

    2

    Backup encrypted volumes

    Does your backup software support backups of encrypted volumes?

    1 Comment
    2025/01/09
    08:36 UTC

    1

    Interface removed automatically

    I have several instances where the interface sometimes gets removed automatically, and I have to add it again.
    Do you have any experience with this?
    I'm working in a Kolla environment with OVN, and I have also installed firewall and VPN services.

    
    [DEFAULT]
    debug = False
    log_dir = /var/log/kolla/neutron
    use_stderr = False
    bind_host = 172.16.1.1
    bind_port = 9696
    api_paste_config = /etc/neutron/api-paste.ini
    api_workers = 5
    rpc_workers = 3
    rpc_state_report_workers = 3
    state_path = /var/lib/neutron/kolla
    core_plugin = ml2
    service_plugins = firewall_v2,flow_classifier,qos,segments,sfc,trunk,vpnaas,ovn-router
    transport_url = rabbit://openstack:password@172.16.1.1:5672//
    dns_domain = [REDACTED]
    external_dns_driver = designate
    ipam_driver = internal
    [nova]
    auth_url = http://172.16.1.254:5000
    auth_type = password
    project_domain_id = default
    user_domain_id = default
    region_name = ovh-vrack
    project_name = service
    username = nova
    password = password
    endpoint_type = internal
    cafile = /etc/ssl/certs/ca-certificates.crt
    [oslo_middleware]
    enable_proxy_headers_parsing = True
    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp
    [agent]
    root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
    [database]
    connection = mysql+pymysql://neutron:password@172.16.1.254:3306/neutron
    connection_recycle_time = 10
    max_pool_size = 1
    max_retries = -1
    [keystone_authtoken]
    service_type = network
    www_authenticate_uri = http://172.16.1.254:5000
    auth_url = http://172.16.1.254:5000
    auth_type = password
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = neutron
    password = password
    cafile = /etc/ssl/certs/ca-certificates.crt
    region_name = ovh-vrack
    memcache_security_strategy = ENCRYPT
    memcache_secret_key = password
    memcached_servers = 172.16.1.1:11211
    [oslo_messaging_notifications]
    transport_url = rabbit://openstack:password@172.16.1.1:5672//
    driver = messagingv2
    topics = notifications
    [oslo_messaging_rabbit]
    heartbeat_in_pthread = false
    rabbit_quorum_queue = true
    [sfc]
    drivers = ovs
    [flowclassifier]
    drivers = ovs
    [designate]
    url = http://172.16.1.254:9001/v2
    auth_uri = http://172.16.1.254:5000
    auth_url = http://172.16.1.254:5000
    auth_type = password
    project_domain_id = default
    user_domain_id = default
    project_name = service
    username = designate
    password = password
    allow_reverse_dns_lookup = True
    ipv4_ptr_zone_prefix_size = 24
    ipv6_ptr_zone_prefix_size = 116
    cafile = /etc/ssl/certs/ca-certificates.crt
    region_name = ovh-vrack
    [placement]
    auth_type = password
    auth_url = http://172.16.1.254:5000
    username = placement
    password = password
    user_domain_name = Default
    project_name = service
    project_domain_name = Default
    endpoint_type = internal
    cafile = /etc/ssl/certs/ca-certificates.crt
    region_name = ovh-vrack
    [privsep]
    helper_command = sudo neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper
    
    [ml2]
    type_drivers = flat,vlan,vxlan,geneve
    tenant_network_types = vlan
    mechanism_drivers = ovn
    extension_drivers = qos,port_security,subnet_dns_publish_fixed_ip,sfc
    [ml2_type_vlan]
    network_vlan_ranges =
    [ml2_type_flat]
    flat_networks = physnet1
    [ml2_type_vxlan]
    vni_ranges = 1:1000
    [ml2_type_geneve]
    vni_ranges = 1001:2000
    max_header_size = 38
    [ovn]
    ovn_nb_connection = tcp:172.16.1.1:6641
    ovn_sb_connection = tcp:172.16.1.1:6642
    ovn_metadata_enabled = True
    enable_distributed_floating_ip = False
    ovn_emit_need_to_frag = True
    
    0 Comments
    2025/01/08
    20:35 UTC

    0

    Why Private Cloud with OpenStack is the Future of IT Infrastructure! 🌐

    Are you ready to take control of your IT environment while ensuring scalability, security, and cost efficiency? OpenStack is revolutionizing private cloud infrastructure for businesses worldwide. Here’s why it’s a game-changer:

    🔒 Enhanced Security: Complete control over your data with advanced encryption and compliance features.
    📈 Unmatched Scalability: Grow your infrastructure effortlessly as your business expands.
    ⚙️ Customizable Solutions: Tailor your cloud to meet your specific needs, thanks to OpenStack’s modular design.
    💡 Cost Efficiency: Open-source means no licensing fees and maximum ROI for your private cloud setup.
    🤝 Hybrid Cloud Ready: Seamless integration with public clouds for a robust hybrid cloud strategy.

    🌟 Future-proof your IT with OpenStack and unlock endless possibilities. Ready to build your private cloud? Let’s make it happen!

    👉 Start your journey with Accrets.com — your trusted partner in deploying secure and scalable OpenStack private cloud solutions.

    💬 Tell us: What’s your top priority for IT infrastructure in 2025? Let’s discuss in the comments! 👇

    1 Comment
    2025/01/08
    06:48 UTC

    4

    OpenStack Lab configuration suggestions (how should I deploy?)

    I have the following hardware in my lab and I am willing to do whatever I need to create/deploy OpenStack on an 8-node cluster. I have three managed switches in-front and each node has at least three NIC ports (although they are all only 1GBe, but LAG groups could be created for performance), and if suggested I have several additional 4-port NICs I can add.

    Regardless, I'm open to any and all suggestions on how and where to deploy the various services that make up a robust OpenStack lab. My further goal is to then deploy OpenShift or some form of managed Kubernetes on top of that.

    Thanks in advance for the consideration:

    https://preview.redd.it/l0nie2quvnbe1.png?width=852&format=png&auto=webp&s=15c111a9a81dbd468b42472414fa26d3cf8dfd67

    Small note: I do have several USB sticks and external drives available to use as boot devices. In fact, Node 4 currently boots from an external drive, and Nodes 5 and 6 boot from RHEL 8 USB sticks.

    0 Comments
    2025/01/07
    23:49 UTC

    Back To Top