/r/openstack
Subreddit dedicated to news and discussions about OpenStack, an open source cloud platform.
OpenStack is a collection of software which enables you to create and manage a cloud computing service similar to Amazon AWS or Rackspace Cloud. This subreddit exists as a place for posting information, asking questions, and discussing news related to this technology.
Is it possible to rebuild an existing OpenStack environment from scratch from a database backup using Kolla-Ansible?
Hi.
I have a bit of a problem.
My workplace is running VMware and Nutanix workloads today, and we have been given a pretty steep savings demand, like STIFF numbers or we are out.
So I have been looking at OpenStack as an alternative, and I got kinda stuck in the architecture phase trying to guess what kind of hardware bill I would create.
I talked a little with Canonical a few years back but did not get the budget then. "We have VMware?"
My problem is that I want to avoid the HCI track, since it has caused us nothing but trouble in Nutanix, and I'm getting nowhere trying to figure out which services can be clustered and which can't.
I want everything to be redundant, so there are like three times as many, but maybe smaller, nodes for everything.
I want to be able to scale compute and storage horizontally over time, and also leave room for a GPU cluster, if anyone pays for it.
This was not doable in Nutanix with HCI, for obvious reasons...
As far as I can tell, I need a small node for cluster management, plus separate compute nodes and storage nodes to fulfill the projected needs.
It's what's left that I can't really get my head around: networking, UI, and undercloud stuff.
Should I clump them all together or keep them separated? Together is probably easier to manage and understand, but perhaps I need more powerful individual nodes.
If separate, how many little nodes/clusters would I need?
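(For what it's worth, a Kolla-Ansible multinode inventory already splits these roles into groups, so "together vs. separate" is mostly a question of which hostnames you put in which group. A minimal sketch with made-up hostnames, just to show the grouping, not a sizing recommendation:)

  [control]
  ctrl0[1:3]

  [network]
  net0[1:2]

  [compute]
  cmp0[1:4]

  [monitoring]
  ctrl0[1:3]

  [storage]
  stor0[1:3]

The control, network and monitoring groups can point at the same three hosts if you want fewer boxes, and be split onto dedicated nodes later; compute and storage then scale horizontally on their own.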
The docs are very... vague... about how best to do this, and I don't know, I might be stark raving mad to even think this is a good idea?
Any thoughts? Pointers?
Should I shut up and embrace HCI?
(I couldn’t find the rules for this sub to see if it was ok)
We’re recruiting for a Senior Cloud Development engineer at Graphcore. Come help us build the next generation of our development clouds!
The link is here:
https://www.openstack.org/community/jobs/view/3570/senior-engineer-:-cloud-development
Feel free to ask me any questions about the role
Hi guys, I deployed OpenStack using Kolla-Ansible and I'm trying to create a cluster template, but it doesn't let me. Horizon just says: "Error: unable to create cluster template". Which services are required in order to set up Magnum?
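(For anyone searching later, a hedged first thing to check rather than a definitive answer: Magnum depends on other services being enabled in globals.yml, Heat in particular for the classic drivers. The exact option names and defaults should be verified against your release's Kolla-Ansible docs:)

  # /etc/kolla/globals.yml (fragment)
  enable_heat: "yes"        # Magnum drives cluster creation through Heat stacks
  enable_magnum: "yes"
  enable_barbican: "yes"    # optional; one way to store cluster certificates

followed by a kolla-ansible -i INVENTORY reconfigure (or deploy) to roll the services out.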
Hey everyone, new OpenStacker here.
I have recently installed OpenStack in my homelab to have a play around and learn the ins and outs.
I used the openstack-ansible 2024.2 AIO install via LXC containers, with Magnum and Trove added to the scenario list.
I am currently playing around with Magnum, trying to set up a small k8s cluster following the guide here:
https://docs.openstack.org/magnum/2024.2/install/launch-instance.html
I seem to be hitting a wall and I cannot find the issue, nor any logs related to it.
When I create the new cluster I can see the master VM boot, and that is it. Nothing else happens, and eventually the stack times out with a CREATE_FAILED "default-master failed, default-worker failed" message.
Going into Orchestration > Stacks I can see that it has failed on the `kube_masters` resource with an error of:
ResourceGroup "kube_masters" Stack "k8-test-cdcp6jhqp7lt" [c660e72d-5eb6-4073-936b-383644a596a7] Timed out, but the VM instance is still alive and I can SSH to the machine.
I removed my old cluster and created a new one with the intention of SSHing to the kube_master and seeing what was going on inside the host during cluster creation, and it just seems stagnant; nothing actually happens.
I am not sure if it is a config issue, a logfile I am missing, or some other obvious thing.
Any help would be appreciated.
Thank you.
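(For anyone landing here with the same symptom, a hedged sketch of where I would usually dig, using the stack name from this post as an example; the exact unit name on the master can differ per driver and release:)

  openstack coe cluster list
  openstack coe cluster show <cluster-name>            # look at status_reason / faults
  openstack stack failures list --long k8-test-cdcp6jhqp7lt
  # on the Fedora CoreOS master itself, the Heat agent logs usually tell the real story:
  ssh core@<master-floating-ip> sudo journalctl -u heat-container-agent --no-pager | tail -n 100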
edit:
Typically, as soon as I posted this I had a light bulb moment. I found this bug report https://bugs.launchpad.net/openstack-ansible/+bug/1979898, did some digging, and it seems to be the same issue.
It looks like I will have to reconfigure Magnum to use the correct CA.
Hi folks,
I installed Kolla-Ansible and was able to launch small images, but when I tried a large image I got this error:
Build of instance 5ede54be-5e82-4847-8b20-181c781e9dc5 aborted: Volume 34650db5-80b5-4407-8ed6-2f7b3f90e237 did not finish being created even after we waited 187 seconds or 61 attempts. And its status is downloading.
How can I fix that?
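(A hedged suggestion for anyone with the same message: the "waited 187 seconds or 61 attempts" part is Nova giving up while Cinder is still downloading the image into the new volume. One common workaround is simply to let Nova wait longer; a sketch assuming the stock option names in nova.conf on the compute hosts, with the caveat that the real fix may be speeding up Glance-to-Cinder image copies, e.g. an image cache or Ceph copy-on-write clones:)

  # /etc/nova/nova.conf (or a Kolla override in /etc/kolla/config/nova.conf)
  [DEFAULT]
  block_device_allocate_retries = 300           # default is much lower; raise for big images
  block_device_allocate_retries_interval = 3    # seconds between status checks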
Hi,
I have a VM with an mdev device associated with it. At every VM deletion, the mdev remains allocated and therefore I cannot re-use it. Is there a way to automatically undefine an mdev device at VM deletion?
Also, a customized script to be executed automatically at VM deletion would be ok, something like:
mdevctl stop -u $MDEV_ID
mdevctl undefine -u $MDEV_ID
Is there a way to automatically execute a script like this at VM deletion?
Thanks
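(A hedged idea rather than a definitive answer to the question above: libvirt calls /etc/libvirt/hooks/qemu for every domain lifecycle event and passes the domain XML on stdin, so a small hook script can stop and undefine any mdev UUIDs it finds when the guest is released. A sketch only; the XML parsing is deliberately crude, assumes one UUID per hostdev, and does not account for Nova wanting to manage the mdev itself:)

  #!/bin/bash
  # /etc/libvirt/hooks/qemu  (make executable, then restart libvirtd)
  # Args: $1 = guest name, $2 = operation (prepare/start/started/stopped/release), $3 = sub-op
  OP="$2"
  if [ "$OP" = "release" ]; then
      # the full domain XML arrives on stdin; pull mdev UUIDs out of the hostdev elements
      XML="$(cat)"
      for MDEV_ID in $(echo "$XML" | grep -oP "<address uuid=.\K[0-9a-f-]+"); do
          mdevctl stop -u "$MDEV_ID"
          mdevctl undefine -u "$MDEV_ID"
      done
  fi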
Hi, I followed this guide https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html to deploy Kolla-Ansible, but there's an error during deployment. Can anyone help me with this problem?
The OpenStack Installation Guide (the general one, not the installation guide for a specific service) seems to be a single generic document rather than a release-specific one, at least when Victoria or a later release is selected. I conclude this from the following observation: navigate to the OpenStack documentation site, then to the Installation Guide (manual deployment or deployment with shell scripts, not the deployment automation tools), while Victoria or later is selected in the top bar. Click the link labeled "OpenStack Installation Guide" and the browser navigates to a new page whose URL is generic, no longer release-specific. This is the introduction to my point.
Please follow this navigation path (starting at the OpenStack Installation Guide): Environment > Host networking. The reader is then presented with the following possible choices.
While the two former seem to describe the plain OpenStack hosts level, the third one also seems to include an interconnection point: the virtual self-service (private) network to the physical network.
If this presentation is compared with another one, the Neutron Installation Guide, there is one small difference in how the material is presented. I learned that the latter guide is always release-specific, contrary to the general OpenStack Installation Guide. The Overview section of the Neutron Installation Guide presents the reader with just two possible choices.
In the latter case the management network doesn't get mentioned.
This makes it unclear to me whether I should ignore the information I found about the management network. If it is crucial for an overall understanding of OpenStack to always keep the management network in mind, why does the Overview section of the Neutron Installation Guide not mention it? If the management network is a still valid and used concept, where does its area of significance start and stop? I aim to learn and use 2024.1.
So, I'm working with a backup software provider that integrates with OpenStack. Originally, our installation only had a single region, and it worked great, but after we added a second region, we found we couldn't add it to the backup software.
The reason we couldn't add it is that the backup software expects a unique Keystone address for each OpenStack region. But that seems crazy to me, because by definition all of my regions share the same Keystone installation. If I gave my second region its own Keystone installation, it would no longer be a region of my cloud; it'd become its own standalone cloud, right?
Am I missing something here? How could my second region have its own keystone installation?
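(To illustrate the "shared Keystone, multiple regions" model from the client side; a sketch with made-up names, URLs and credentials:)

  # clouds.yaml - both entries authenticate against the same Keystone
  clouds:
    prod-region-one:
      auth:
        auth_url: https://keystone.example.com:5000/v3
        username: backup-svc
        password: secret
        project_name: backups
        user_domain_name: Default
        project_domain_name: Default
      region_name: RegionOne
    prod-region-two:
      auth:
        auth_url: https://keystone.example.com:5000/v3   # same Keystone endpoint
        username: backup-svc
        password: secret
        project_name: backups
        user_domain_name: Default
        project_domain_name: Default
      region_name: RegionTwo

The only thing that differs per region is region_name; the catalog returned by that single Keystone then lists per-region endpoints for Nova, Cinder, and so on.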
Join this interactive lab session: Platform9 will host the next 0-60 Virtualization Workshop: A Hands-On Lab on Dec 10th and 12th.
This hands-on lab is designed for VMware administrators who are considering an alternative hypervisor (KVM) and virtualization management solution. Engineers from Platform9, many of whom worked at VMware or have extensive experience using VMware, will be running these labs using Platform9 Private Cloud Director (PCD). PCD is a production-ready, enterprise-grade virtualization solution that is designed to be easy to use and manage for VMware admins.
Our goal is to have 1 engineer for ~3 participants, to ensure we can provide a high level of interactivity and guidance during the sessions.
Platform9 will be providing the hardware for the lab; however, please ensure that your network allows outbound SSH connectivity. There is no cost to participate in the lab.
Introducing vJailbreak:
vJailbreak is a new free tool from Platform9 that discovers your current VMware environment and migrates your VMs, data, and network configurations to Private Cloud Director. See this tool in action on Day 2 where we showcase live migration of your running VMs (with change block tracking and minimum downtime) or offline VMs, with an easy-to-use user interface as well as a powerful underlying API.
Session prerequisites:
Day 1 Schedule - Tuesday, December 10, 2024 at 9 AM PT (2.5 hours)
Day 2 Schedule - Thursday, December 12, 2024 at 9 AM PT (2.5 hours)
I'm trying to deploy OpenStack using Kolla-Ansible. Everything is smooth; however, I have a question about how to update and apply changes in globals.yml.
Here are some of the settings I want to change in globals.yml:
network_interface
swift_replication_interface
The kolla-ansible -i INVENTORY reconfigure command does not seem to apply these changes. I do not know which command should be executed to apply them.
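(A hedged answer attempt: interface settings like network_interface end up in the generated service configs, so after editing globals.yml the usual sequence is a reconfigure, optionally limited to the affected services; no guarantee that changing the primary interface won't also require recreating some containers:)

  # after editing /etc/kolla/globals.yml
  kolla-ansible -i multinode reconfigure                        # regenerate configs and restart services
  kolla-ansible -i multinode reconfigure --tags neutron,swift   # or only the services touched by the change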
Hello everyone, hope you're all having a good day.
I'm just getting started with OpenStack. I've been using DevStack for the past few weeks and everything went fine. The problem is that I've never managed to monitor my small cloud project with Ceilometer + Gnocchi, and I'm not sure it even works anymore. What is the best method to deploy monitoring in OpenStack?
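(If it helps: on DevStack the telemetry pieces are enabled as plugins in local.conf. A minimal sketch, assuming a recent DevStack branch; the variable names around the metric backend have shifted between branches, so double-check them against the Ceilometer plugin docs:)

  # local.conf (fragment)
  [[local|localrc]]
  enable_plugin ceilometer https://opendev.org/openstack/ceilometer
  enable_plugin aodh https://opendev.org/openstack/aodh          # optional, for alarms
  CEILOMETER_BACKEND=gnocchi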
In the title line, the introduction to the Virtual Machine Image Guide is quoted. The guide thus makes a leap from the underpinning cloud to the inside of a virtual machine; a VM image is, in other words, the VM's interior.
I would say a virtual machine on its own is useless unless it operates in a virtual environment comprising a network, the remaining items of infrastructure, and devices at the edge. Why might the guide's authors have taken that shortcut from the underpinning cloud to the interior of the virtual machine? And, on the other hand, what mistakes are there in my view?
I’m considering setting up OpenStack for my homelab and wanted to get some insights from those with experience. How reliable has it been for you once it’s set up?
How much management does it require on a regular basis?
Have you encountered frequent issues or failures? If so, how challenging are they to resolve?
Would you say it’s hard to maintain in a smaller-scale setup like a homelab?
I’d really appreciate hearing about your experiences, especially regarding troubleshooting and overall reliability. Thank you in advance!
I have been using OpenStack with the NVIDIA GRID vGPU solution. The issue is that once a VM with a vGPU is created, VNC remote login no longer works and shows the error "guest has not initialized the display yet".
The workaround I found was to modify the domain XML with virsh on the host running the VM, specifically the hostdev segment: <hostdev ..... model=vfio-pci display=on>. I switch the display property to off and then remote login works; I only need the GPU for CUDA, not to run graphics. However, this is a very cumbersome workaround that needs to be repeated each time, and once you power off the VM you have to redo everything again.
Is there a way to make OpenStack Nova take this parameter into account? I assume Nova is responsible for generating the configuration and libvirt only applies it on the host. Is this information found in the Nova conf files or in flavors? I tried to search the GitHub repo but had no success. Any help is appreciated. Thank you.
Using Kolla-Ansible, I set up OpenStack and now need to configure a public routed IP for the OpenStack dashboard. What’s the best and most efficient way to do this?
I’m trying to set this up as a public cloud. I already have a pool of public IPs and successfully managed to create an external network and assign floating IPs to VMs. However, I’m unsure how to configure the public IP for the dashboard.
If anyone can assist, I’m willing to provide remote access to the setup. Any help would be greatly appreciated!
Here's my global.yml file for reference: GitHub Link
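(A hedged pointer rather than a definitive answer: in Kolla-Ansible the dashboard, like the public APIs, is normally published on the external VIP, so exposing Horizon on a routed public address usually means setting the external VIP/FQDN options and re-running the deployment. A sketch with made-up addresses:)

  # /etc/kolla/globals.yml (fragment)
  kolla_internal_vip_address: "10.0.0.250"        # existing internal VIP
  kolla_external_vip_address: "203.0.113.10"      # routed public IP for Horizon / public APIs
  kolla_external_fqdn: "cloud.example.com"        # optional, if you have DNS for it
  kolla_enable_tls_external: "yes"                 # strongly recommended on a public address

and then re-run kolla-ansible -i INVENTORY reconfigure.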
Hello everyone, could you give me some advice and help me better understand Neutron? On my VirtualBox VM I installed OpenStack using Packstack (all-in-one installation). I have access to the Horizon dashboard. I'm able to launch an instance and associate a floating IP, but from the controller node I cannot reach my instance.
Here is my interface config:
VirtualBox Promiscuous mode : Allow All
[root@packstack ~(keystone_admin)]# ip -br -c a
lo UNKNOWN 127.0.0.1/8 ::1/128
enp0s3 UP 9.10.93.8/24 fe80::a00:27ff:fe2e:150a/64
enp0s8 UP 9.11.93.8/24 fe80::a00:27ff:fec7:56ab/64
enp0s9 UP fe80::a00:27ff:fef9:3cc7/64
enp0s10 UP 9.12.93.15/24 fe80::a00:27ff:feff:3641/64
ovs-system DOWN
br-tun DOWN
br-int DOWN
br-ex UNKNOWN 9.12.93.8/24 fe80::b021:85ff:fe8a:9d44/64
qbr9eefea66-89 UP
qvo9eefea66-89@qvb9eefea66-89 UP fe80::1409:2aff:feb4:e37d/64
qvb9eefea66-89@qvo9eefea66-89 UP fe80::8c84:15ff:fe7d:8896/64
tap9eefea66-89 UNKNOWN fe80::fc16:3eff:fec6:5f5c/64
Instance status:
[root@packstack ~(keystone_admin)]# openstack server list
+--------------------------------------+-----------+--------+----------------------------------+--------------------------+-----------+
| ID                                   | Name      | Status | Networks                         | Image                    | Flavor    |
+--------------------------------------+-----------+--------+----------------------------------+--------------------------+-----------+
| 8d2e2f04-8080-44df-923f-9728ebabe9e5 | testrocky | ACTIVE | private1=10.0.0.202, 9.12.93.203 | N/A (booted from volume) | m1.devops |
+--------------------------------------+-----------+--------+----------------------------------+--------------------------+-----------+
[root@packstack ~(keystone_admin)]# ip netns list
qdhcp-2a02741e-35f0-4a61-81b0-abd4b5a09f36 (id: 2)
qdhcp-bcc1c132-074f-45d5-a715-a2d371cdb5be (id: 1)
qrouter-a4c63603-b8e8-460a-bbc7-47503fe6cc8e (id: 0)
[root@packstack ~(keystone_admin)]# ip netns exec qrouter-a4c63603-b8e8-460a-bbc7-47503fe6cc8e ping 9.12.93.1
PING 9.12.93.1 (9.12.93.1) 56(84) bytes of data.
From 9.12.93.201 icmp_seq=1 Destination Host Unreachable
From 9.12.93.201 icmp_seq=2 Destination Host Unreachable
[root@packstack ~(keystone_admin)]# ip netns exec qrouter-a4c63603-b8e8-460a-bbc7-47503fe6cc8e ping 9.12.93.203
PING 9.12.93.203 (9.12.93.203) 56(84) bytes of data.
64 bytes from 9.12.93.203: icmp_seq=1 ttl=64 time=9.83 ms
[root@packstack ~(keystone_admin)]# ping 9.12.93.203
PING 9.12.93.203 (9.12.93.203) 56(84) bytes of data.
From 9.12.93.15 icmp_seq=1 Destination Host Unreachable
From 9.12.93.15 icmp_seq=2 Destination Host Unreachable
[root@packstack ~(keystone_admin)]# openstack port list --network public1
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------+--------+
| ID | Name | MAC Address | Fixed IP Addresses | Status |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------+--------+
| 2b215f41-edf8-4c61-8969-383143340444 | | fa:16:3e:30:7e:08 | ip_address='9.12.93.200', subnet_id='01aff9ec-e22c-47d3-b92e-192b01c8281a' | ACTIVE |
| 31d7b194-50a0-4a25-b102-542210e5f3f3 | | fa:16:3e:28:39:a9 | ip_address='9.12.93.203', subnet_id='01aff9ec-e22c-47d3-b92e-192b01c8281a' | N/A |
| 68351942-28a1-4df3-8661-bf157fcd5982 | | fa:16:3e:bf:66:56 | ip_address='9.12.93.201', subnet_id='01aff9ec-e22c-47d3-b92e-192b01c8281a' | ACTIVE |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------+--------+
[root@packstack ~(keystone_admin)]# openstack port show 31d7b194-50a0-4a25-b102-542210e5f3f3
+-------------------------+----------------------------------------------------------------------------+
| Field | Value |
+-------------------------+----------------------------------------------------------------------------+
| admin_state_up | UP |
| allowed_address_pairs | |
| binding_host_id | |
| binding_profile | |
| binding_vif_details | |
| binding_vif_type | unbound |
| binding_vnic_type | normal |
| created_at | 2024-11-15T15:57:42Z |
| data_plane_status | None |
| description | |
| device_id | 3dc9d9c3-28eb-4dfb-a41b-9bbfac9f96da |
| device_owner | network:floatingip |
| device_profile | None |
| dns_assignment | None |
| dns_domain | None |
| dns_name | None |
| extra_dhcp_opts | |
| fixed_ips | ip_address='9.12.93.203', subnet_id='01aff9ec-e22c-47d3-b92e-192b01c8281a' |
| hardware_offload_type | None |
| hints | |
| id | 31d7b194-50a0-4a25-b102-542210e5f3f3 |
| ip_allocation | None |
| mac_address | fa:16:3e:28:39:a9 |
| name | |
| network_id | bcc1c132-074f-45d5-a715-a2d371cdb5be |
| numa_affinity_policy | None |
| port_security_enabled | False |
| project_id | |
| propagate_uplink_status | None |
| resource_request | None |
| revision_number | 2 |
| qos_network_policy_id | None |
| qos_policy_id | None |
| security_group_ids | |
| status | N/A |
| tags | |
| trunk_details | None |
| updated_at | 2024-11-15T15:57:43Z |
+-------------------------+----------------------------------------------------------------------------+
I couldn't find anything related to the port binding in the log files.
Any advice will be welcome.
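(A hedged debugging sketch for this kind of "the router namespace can ping the floating IP but the host can't" situation; the two usual suspects I would check first are whether the physical NIC is actually attached to br-ex, and whether the instance's security group allows ICMP/SSH at all. Commands only, adjust interface and group names to your setup, and note that moving a NIC into br-ex also means moving its host IP onto the bridge:)

  # is the external NIC plugged into the external bridge?
  ovs-vsctl list-ports br-ex
  ovs-vsctl add-port br-ex enp0s10        # only if it is missing

  # does the default security group let traffic in at all?
  openstack security group rule create --protocol icmp default
  openstack security group rule create --protocol tcp --dst-port 22 default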
Hello everybody, I was trying to deploy OpenStack using Kayobe on Ubuntu 22.04. I took as a reference the deployment made by StackHPC (https://github.com/stackhpc/a-universe-from-nothing). It goes through well until it reaches the command "kayobe seed service deploy", specifically in this phase:
PLAY [Apply role bifrost] **********************************************************************************************************************
TASK [Gathering Facts] *************************************************************************************************************************
ok: [seed]
TASK [bifrost : include_tasks] *****************************************************************************************************************
included: /root/my_test/venvs/kolla-ansible/share/kolla-ansible/ansible/roles/bifrost/tasks/deploy.yml for seed
TASK [bifrost : Ensuring config directories exist] *********************************************************************************************
ok: [seed] => (item=bifrost)
TASK [bifrost : Generate bifrost configs] ******************************************************************************************************
ok: [seed] => (item=bifrost)
ok: [seed] => (item=dib)
ok: [seed] => (item=servers)
TASK [bifrost : Template ssh keys] *************************************************************************************************************
ok: [seed] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
ok: [seed] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
ok: [seed] => (item={'src': 'ssh_config', 'dest': 'ssh_config'})
TASK [bifrost : Starting bifrost deploy container] *********************************************************************************************
changed: [seed]
TASK [bifrost : Ensure log directories exist] **************************************************************************************************
changed: [seed]
TASK [bifrost : Bootstrap bifrost (this may take several minutes)] *****************************************************************************
***ERROR***
I checked the seed VM and there's nothing wrong with it. It stops almost immediately with the following error: "sudo: unable to resolve host seed: Name or service not known". I even tried adding the seed machine in /etc/hosts ("seed_ip seed"), but it doesn't make a difference. This doesn't make any sense, because seed is well defined everywhere. (NOTE: the bifrost container is built using: kayobe seed container image build --push)
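(A hedged guess at where to look: the bootstrap task runs inside the bifrost_deploy container on the seed, so an /etc/hosts entry on the seed host itself may not be visible where sudo is actually failing. Worth confirming from inside the container; a sketch, assuming Docker and the default container name:)

  # on the seed VM
  sudo docker ps --filter name=bifrost_deploy
  sudo docker exec -it bifrost_deploy hostname
  sudo docker exec -it bifrost_deploy cat /etc/hosts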
Hi folks
1 What are the benefits of using Ceph for storage, what other options are available, and how does Ceph compare to them?
2 Also, if I have 2 TB of storage, what would happen if I added a node with 3 TB of storage, i.e. unequal hard drive sizes?
3 Also, what if I have different drive types, like SSD and NVMe, what would happen? (See the sketch below.)
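(Not an authoritative answer, just a hedged illustration of how Ceph handles points 2 and 3: unequal disks simply get proportionally different CRUSH weights, and mixed media is usually split with device classes and per-class CRUSH rules bound to separate pools; pool names below are made up:)

  # weights are derived from raw size, so a 3 TB OSD takes roughly 1.5x the data of a 2 TB one
  ceph osd df tree

  # separate SSD/NVMe by device class and give each pool its own rule
  ceph osd crush rule create-replicated nvme-rule default host nvme
  ceph osd crush rule create-replicated ssd-rule  default host ssd
  ceph osd pool set volumes crush_rule ssd-rule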
I'm trying to use the Magnum service, so I just enabled it on my cluster (2024.1). But now when I try to create a template I receive an error. Browsing the logs I found this:
2024-11-16 21:11:53.782 3667 ERROR wsme.api [None req-af293014-9047-4e23-b342-70bd1a48e517 848fa3b73c7840be92d9c5bd269f3233 9cadae2845f04f1fad03b44cec971692 - - ef0a4f603570470883e1b027ce981c25 -] Server-side error: "Configuration file ~/.kube/config not found"
Am I missing something? Why should I have to specify a kubeconfig?
My template creation example:
openstack coe cluster template create k8s-flan-large-41 \
--image Fedora-CoreOS-41 \
--keypair mykey \
--external-network external \
--dns-nameserver 192.168.40.5 \
--flavor m2.large \
--master-flavor m2.large \
--volume-driver cinder \
--docker-volume-size 10 \
--network-driver flannel \
--docker-storage-driver overlay2 \
--coe kubernetes
Hi all,
I am toying around with OpenStack Dalmatian. I am manually installing OpenStack to learn but something is unclear.
I want to add additional controller nodes for redundancy (Keystone, Neutron, etc.), but it's unclear to me exactly how you do that.
I am assuming that for the DB you install another DB on controller 2 and set up a cluster with replication, then install the services and configure them as normal. The documentation is not clear on how this is done.
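(A hedged sketch of the usual pattern, since the manual install guide mostly leaves HA to the operator: the database is typically a Galera cluster across all controllers, the stateless API services are simply installed on every controller, and HAProxy plus a VIP sits in front of both. A minimal MariaDB/Galera fragment with made-up hostnames, just to show the shape:)

  # /etc/mysql/mariadb.conf.d/99-galera.cnf on every controller
  [galera]
  wsrep_on                 = ON
  wsrep_provider           = /usr/lib/galera/libgalera_smm.so
  wsrep_cluster_name       = openstack
  wsrep_cluster_address    = gcomm://ctrl1,ctrl2,ctrl3
  wsrep_node_name          = ctrl1          # per node
  wsrep_node_address       = 10.0.0.11      # per node
  binlog_format            = ROW
  default_storage_engine   = InnoDB
  innodb_autoinc_lock_mode = 2

Bootstrap the first node with galera_new_cluster, start MariaDB normally on the others, and point all the services at the VIP rather than at a single controller.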
Hi guys,
Not sure if anyone has noticed this issue yet, but I enabled Ceph RGW with Keystone using the Swift API. I can create the containers/buckets via the CLI and can confirm they were created. But if I check the object store section in OpenStack Skyline's GUI it doesn't show anything, just a 503 error. Horizon shows the containers/buckets fine.
Hello friends!
I hope everyone is well.
I've stood up an OpenStack cluster in my lab, and I'm getting a very strange error that I'd like your help with.
When I try to start a new server instance, I get this error:
ERROR nova.network.neutron - The [neutron] section of your nova configuration file must be configured for authentication with the networking service endpoint. See the networking service install guide for details: https://docs.openstack.org/neutron/latest/install/
ERROR nova.compute.manager - Instance failed network setup after 1 attempt(s): neutronclient.common.exceptions.Unauthorized: Unknown auth type: None
Does anyone have any idea about this error?
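(A hedged pointer: this particular message means nova-compute cannot find valid Keystone credentials in the [neutron] section of nova.conf. The install guide's example looks roughly like the following; the controller URL, region and password are placeholders to replace with your own:)

  # /etc/nova/nova.conf
  [neutron]
  auth_url = http://controller:5000
  auth_type = password
  project_domain_name = Default
  user_domain_name = Default
  region_name = RegionOne
  project_name = service
  username = neutron
  password = NEUTRON_PASS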
Has anyone installed SentinelOne Agent on OpenStack & KVM servers? If so, has it caused any issues?
hi folks
I installed OpenStack on a remote server and I can access the dashboard from my local network, but I can't access it from outside even though I have done port forwarding.
The error message I got:
This site can’t be reached
The webpage at **http://myip:port** might be temporarily down or it may have moved permanently to a new web address.
ERR_UNSAFE_PORT
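(A hedged guess: ERR_UNSAFE_PORT is the browser refusing to connect, not the server failing; Chrome blocks a built-in list of "unsafe" ports regardless of what is listening. Either forward through a conventional external port such as 8080, or, for a quick local test only, allow the port explicitly; 10000 below is just a stand-in for whichever port was chosen:)

  google-chrome --explicitly-allowed-ports=10000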
I am working as a desktop support engineer in a small company. I have completed CL-110, and now I have an interview scheduled for an OpenStack role. What questions get asked by the interviewer? Can someone guide me for the interview?
In OpenStack, with AMD, will I be able to provision a virtual GPU?
What are the best practices for (application) high availability across multiple regions? What are the thought-out scenarios for regions? Should my application live in multiple regions? If so, how do I make it reachable from multiple regions?
If an application should be contained to one region, how would I migrate/recover the application to another region?
Is there a way to dynamically make FIPs available in another region when one fails? BGP can generally do that, but how do I make sure they are available in OpenStack?
The last question is regarding a multi-region setup and Keystone. At least in Kolla-Ansible, there is only one Keystone instance for all regions, so if the first region where Keystone lives goes down, the auth service for all regions also goes down. How can this be made HA?
In case anyone is interested:
Kolla-Ansible, from my side, is the easiest and most flexible way to deploy OpenStack. The folks managing the project are awesome: https://launchpad.net/~kolla-drivers/+members#active
Official Announcement: https://lists.openstack.org/archives/list/release-announce@lists.openstack.org/thread/N5CDPAUTL57OBBISJUXGJGMGYJEEOCYV/
Release notes : https://docs.openstack.org/releasenotes/kolla-ansible/2024.2.html
Please report issues through:
https://bugs.launchpad.net/kolla-ansible/+bugs
Upgrade procedure https://docs.openstack.org/kolla-ansible/2024.2/user/operating-kolla.html
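(For anyone upgrading an existing deployment to 2024.2, the short version of the documented procedure is roughly the following; a sketch only, the real steps (globals review, backups, release notes) are in the link above:)

  pip install --upgrade kolla-ansible     # or pin to the 2024.2 series
  kolla-ansible install-deps              # refresh the Ansible collection dependencies
  kolla-ansible -i multinode pull         # pre-pull the new images
  kolla-ansible -i multinode upgrade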