/r/openstack
Subreddit dedicated to news and discussions about OpenStack, an open source cloud platform.
OpenStack is a collection of software which enables you to create and manage a cloud computing service similar to Amazon AWS or Rackspace Cloud. This subreddit exists as a place for posting information, asking questions, and discussing news related to this technology.
More information on OpenStack can be obtained via the following external resources:
Hello everyone,
I'm testing OpenStack 2023.2 multinode, deployed with kolla-ansible.
My topology has 3 nodes: controllers on node1 and node2, and the compute service on all three. Each node has two interfaces: ens3 for management and ens6 for the provider network. When I create an instance on node1 or node2 (the two nodes running both controller and compute), I can access the console successfully, but when I create an instance on compute node 3 (which runs only the compute service), I cannot access the console.
Checking nova-novncproxy.log, it reports: Request timed out: TimeoutError(110, 'ETIMEDOUT').
Can anyone help me debug this case? Thanks.
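A common cause of that timeout is that the proxy on the controllers cannot reach the VNC port range (5900+) on compute node 3 over the management network. A minimal reachability sketch, assuming you pass compute3's ens3 address as the first argument (the default below is only a placeholder):

```shell
# Sketch: from a controller running nova-novncproxy, probe the QEMU VNC
# port range on the compute node over the management network. Pass
# compute node 3's ens3 address as $1 (127.0.0.1 is a placeholder).
HOST="${1:-127.0.0.1}"
open=0
for port in 5900 5901 5902; do
  if timeout 2 bash -c "exec 3<>/dev/tcp/${HOST}/${port}" 2>/dev/null; then
    echo "port ${port} reachable"
    open=$((open + 1))
  fi
done
if [ "${open}" -eq 0 ]; then
  echo "no VNC ports reachable: check firewalls, and that server_proxyclient_address in nova.conf on compute3 is an ens3 address the controllers can route to"
fi
```

If the ports are unreachable from the controllers but open locally on compute3, the problem is firewalling or routing rather than Nova itself.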
So this is my last hope.
We have a school project to build a private cloud with our laptops. So far everything went well: I followed the documents and could create instances.
The issue is that I can't ping 8.8.8.8 or the host IP, and the instances can't ping each other.
I have configured everything using option 1: provider networks.
Edit: the version I'm using is Antelope.
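When even instance-to-instance pings fail on a provider network, the first suspect is usually security group rules, since the default group blocks inbound ICMP. A hedged sketch, assuming the default security group and the standard openstack CLI:

```shell
# Allow ICMP (ping) and SSH into instances in the default security group.
# Run as the project user; "default" is the stock group name.
openstack security group rule create --protocol icmp default
openstack security group rule create --protocol tcp --dst-port 22 default
```

If pings to 8.8.8.8 still fail with the rules in place, check that the provider physical interface is actually attached to the provider bridge on each host.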
Hi,
I was searching for an agentless OpenStack backup solution. Can someone suggest what exactly they use in an enterprise/MSP multi-tenant environment?
With Ceph or without Ceph (FC).
Thank you
Has anyone attempted deploying multiple regions using Charmed OpenStack? I have a MAAS controller in each region and a Juju controller with a separate model for each region (region1, region2). It seems cross-model relations are what I want, but I can find very little information online on how to use them to point the second region at the Keystone in region1. Of course region2 will rely on region1, but I will look into a DR solution for that later.
I am building a small OpenStack cloud in my lab and had a few questions.
I want to use OpenStack Ansible for the deployment.
I have 5 physical servers, each with >128GB of RAM and 2 Xeon CPUs with 24 cores each.
3 of the 5 servers are currently running VMware ESXi in a cluster managed by vCenter.
My questions:
I want to use my remaining two servers as Nova compute hosts. Can the remaining OpenStack services be installed on VMs running in my VMware environment? Would this be acceptable for production? If not, what would be a better solution?
Can I separate the OpenStack services and install them on their own VMs, e.g. 2 VMs for Keystone, 2 for Neutron, etc.? The documentation describes a control node that bundles services like Horizon and Keystone together, but I was curious about this approach.
Eventually I want to get off VMware since they were purchased by Broadcom, but for now I still need ESXi for some of my workloads.
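For what it's worth, OpenStack-Ansible doesn't care whether a target host is physical or virtual, and openstack_user_config.yml already lets you place individual services on their own hosts. A hedged sketch of the idea (hostnames and IPs are made up; group names are the ones I recall from the OSA docs):

```yaml
# /etc/openstack_deploy/openstack_user_config.yml (illustrative fragment)
# Two VMs dedicated to identity, two to networking, bare metal for compute.
identity_hosts:
  keystone-vm1:
    ip: 172.29.236.11
  keystone-vm2:
    ip: 172.29.236.12
network_hosts:
  neutron-vm1:
    ip: 172.29.236.21
  neutron-vm2:
    ip: 172.29.236.22
compute_hosts:
  compute1:
    ip: 172.29.236.31
  compute2:
    ip: 172.29.236.32
```

The usual caveat for control-plane-on-VMware setups is circular dependency: if the ESXi cluster goes down, so does your OpenStack control plane.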
I'm using OpenStack Yoga. I created an instance with a 400GB disk, but after 60 retries the volume allocation failed (I've since raised block_device_allocate_retries to 500 in nova.conf).
I later deleted the failing instance, but the associated 400GB volume would not be deleted. I tried to manually reset and delete the volume using the commands:
cinder reset-state --reset-migration-status VOLUME_ID
cinder reset-state --attach-status detached VOLUME_ID
cinder delete VOLUME_ID
but the volume remained in the "error_deleting" state, so I followed this other procedure to delete the volume directly from the cinder DB:
#mysql > use cinder;
Set the cinder volume state to available:
#mysql> update volumes set attach_status='detached', status='available' where id='<volume_id>';
Exit the mysql prompt and try to delete the volume using the volume ID:
#cinder delete VOLUME_ID
Since this procedure didn't work anyway, I ran this command in the cinder db:
#mysql> update volumes set deleted=1, status='deleted', deleted_at=now(), updated_at=now() where deleted=0 and id='<volume_id>';
PROBLEM:
If I run the openstack volume list command, the volume looks deleted because it no longer appears in the list, but in the OpenStack dashboard's pie charts that 400GB of volume space is still shown as assigned. In fact, if I try to create another 400GB instance, it fails with VolumeSizeExceedsAvailableQuota.
PLUS: I also noticed that after I deleted the instance it no longer appeared in the OpenStack dashboard, while openstack server list still showed it. So I ran openstack server delete INSTANCE_ID thinking it would solve all my problems. Now the instance no longer exists, nor does the 400GB volume associated with it, but the volume dashboard still shows the 400GB as occupied, and I actually can't create a new 400GB instance.
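Deleting rows from the volumes table by hand leaves Cinder's cached per-project usage counters stale; they live in the quota_usages table. A hedged sketch of resetting them so the 400GB stops counting against the quota (table and column names as I recall them from the Cinder schema; back up the DB first):

```sql
-- Illustrative only: zero the cached usage for the affected project so
-- the orphaned 400GB no longer counts against the quota. Cinder will
-- recalculate usage on the next reservation.
USE cinder;
UPDATE quota_usages
   SET in_use = 0, reserved = 0
 WHERE project_id = '<project_id>'
   AND resource IN ('gigabytes', 'volumes');
```

Setting in_use to 0 is safe only if the project really has no remaining volumes; otherwise set it to the actual totals.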
Solved: See comment below
There was a similar question asked a while back about images/glance, but this is different enough I thought it warranted its own thread. If I blow away and redeploy openstack and reconnect it to the same ceph cluster, is there a way to get cinder to know about/slurp in those existing volumes in the ceph cluster? I know the obvious way is to export the volume out of ceph, then upload it as a glance image on the new, freshly deployed openstack, done that loads of times before, but that seems silly to download and reupload to ceph essentially, given the volume is already sitting there in the volumes pool. ChatGPT suggests creating a nothingburger volume then hacking the cinder database to point to the rbd location, but that doesn't sound like a safe/sane approach. I'm on yoga and reef if it makes a difference.
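Cinder does have a supported path for adopting pre-existing backend volumes without the export/re-upload round trip: volume manage/unmanage. A hedged sketch (the host@backend#pool string, name, and RBD image name are placeholders; check your real pool string with `cinder get-pools`):

```shell
# Adopt an existing RBD image from the Ceph volumes pool into Cinder,
# referencing it by its backend name (the rbd image name).
cinder manage --id-type source-name \
    --name recovered-vol \
    cinder-host@rbd-1#rbd-1 volume-<original-uuid>
```

This creates a new Cinder volume record pointing at the existing RBD image in place, which is essentially the clean version of the database hack ChatGPT suggested.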
sudo systemctl start mariadb.service
Job for mariadb.service failed because the control process exited with error code.
See "systemctl status mariadb.service" and "journalctl -xe" for details.
Please help me resolve this.
Can you provision a minikube cluster in an OpenStack instance? If yes, how should one go about setting the required configuration for it?
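Yes; an OpenStack instance is just a Linux VM, so minikube runs in it like on any other host. The main requirements are a flavor with enough CPU/RAM and either nested virtualization or a container driver. A hedged sketch on an Ubuntu instance, using the docker driver (no nested virt needed):

```shell
# Inside the instance: install docker and minikube, then start a
# single-node cluster with the docker driver.
sudo apt-get update && sudo apt-get install -y docker.io
sudo usermod -aG docker "$USER"   # log out and back in for this to apply
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube start --driver=docker --cpus=2 --memory=4096
```

Also make sure the instance's security group allows whatever access you need to the API server if you plan to reach it from outside the VM.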
Want to find a mentor or mentee? (Channel entry: https://webchat.oftc.net/?channels=openstack-mentoring)
About: https://docs.openstack.org/contributors/common/mentoring.html#openstack-mentoring
Hello everyone,
I need help with migrating a VM running Oracle Linux 7.9 from VMware Workstation to OpenStack. The root and swap directories are mounted as LVM.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 250G 0 disk
├─sda2 8:2 0 249G 0 part
│ ├─ol-root 251:1 0 231.1G 0 lvm /
│ └─ol-swap 251:0 0 17.9G 0 lvm [SWAP]
└─sda1 8:1 0 1G 0 part /boot
On the source VM, fstab uses the LVM labels. However, when I import it into OpenStack, it fails saying /dev/mapper/ol-root does not exist.
I was able to recover this machine by mounting the disk to another instance on OpenStack and running the commands:
dracut --regenerate-all -f && grub2-mkconfig -o /boot/grub2/grub.cfg.
But I wanted to fix this issue for future machine migrations, so I did the following:
1- Installed the virtio drivers following this guide.
grep -i virtio /boot/config-$(uname -r)
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_VIRTIO_VSOCKETS=m
CONFIG_VIRTIO_VSOCKETS_COMMON=m
CONFIG_NET_9P_VIRTIO=m
CONFIG_VIRTIO_BLK=m
CONFIG_VIRTIO_BLK_SCSI=y
CONFIG_SCSI_VIRTIO=m
CONFIG_VIRTIO_NET=m
CONFIG_VIRTIO_CONSOLE=m
CONFIG_HW_RANDOM_VIRTIO=m
CONFIG_DRM_VIRTIO_GPU=m
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI_LIB=m
CONFIG_VIRTIO_PCI_LIB_LEGACY=m
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_PCI_LEGACY=y
CONFIG_VIRTIO_VDPA=m
CONFIG_VIRTIO_PMEM=m
CONFIG_VIRTIO_BALLOON=m
CONFIG_VIRTIO_INPUT=m
CONFIG_VIRTIO_MMIO=y
lsinitrd /boot/initramfs-$(uname -r).img | grep virtio
-rw-r--r-- 1 root root 22 Apr 10 06:50 etc/modules-load.d/virtio.conf
-rw-r--r-- 1 root root 19888 Mar 5 11:18 usr/lib/modules/5.4.17-2136.329.3.1.el7uek.x86_64/kernel/drivers/block/virtio_blk.ko.xz
-rw-r--r-- 1 root root 47880 Mar 5 11:18 usr/lib/modules/5.4.17-2136.329.3.1.el7uek.x86_64/kernel/drivers/net/virtio_net.ko.xz
2- Changed fstab to use UUIDs:
# Created by anaconda on Wed Mar 13 13:26:14 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=a55f4689-cb4a-4173-a307-4f3dd1393b3c / xfs defaults 0 0
UUID=58487591-22fd-48c2-9a60-34904e6a4c6d /boot xfs defaults 0 0
#/dev/mapper/ol-home /home xfs defaults 0 0
UUID=7749f56e-fbcd-4e20-8d55-6db5462401bf swap swap defaults 0 0
3- Updated /etc/default/grub to use UUIDs and made sure that /boot/grub2/grub.cfg is using them.
linux16 /vmlinuz-5.4.17-2136.329.3.1.el7uek.x86_64 ro crashkernel=auto root=UUID=a55f4689-cb4a-4173-a307-4f3dd1393b3c swap=UUID=7749f56e-fbcd-4e20-8d55-6db5462401bf rhgb quiet numa=off transparent_hugepage=never
After making these changes, I imported the VM into OpenStack again, but now I'm getting a new error: /dev/disk/by-uuid does not exist.
I've been stuck with this issue for days and haven't found a solution that works. Any help would be greatly appreciated!
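A missing /dev/disk/by-uuid at boot usually means the virtio storage drivers still aren't inside the initramfs: your kernel config builds them as modules (=m), so dracut must pack them in explicitly. Since regenerating the initramfs fixed the first machine, a sketch of doing it proactively on the source VM before export (module list is my assumption based on your config output):

```shell
# Rebuild the initramfs with the virtio storage/PCI drivers explicitly
# included, so the root disk is visible when the VM boots on KVM/virtio.
dracut --force --add-drivers "virtio_blk virtio_pci virtio_scsi virtio_net" \
    /boot/initramfs-$(uname -r).img $(uname -r)
```

You can verify with `lsinitrd ... | grep virtio` afterwards that virtio_pci made it in alongside virtio_blk, since it was missing from your lsinitrd output above.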
I have 4 physical servers on-prem and I will deploy an OpenStack private cloud.
My question is: what is the name of the technology that combines all 4 servers into one computing pool?
What I have in mind is:
In other words: I need a tool to control different virtual machines from one platform.
Greetings Community,
After deploying OpenStack 2023.1 via Kolla-Ansible, I'm stuck on a very interesting issue.
Topology: 1 node (roles: controller, network) + 1 node (roles: compute).
Deploying a k8s cluster with Magnum works fine; everything works.
Topology: 3 nodes (roles: controller, network) + 1 node (roles: compute).
With this topology, deploying a k8s cluster via Magnum fails with:
Error: Failed to create trustee or trust for Cluster: 7df45e89-0566-41d4-9bea-eb5e4c7eeddb
#magnum.conf
[trust]
cluster_user_trust = True
The logs did not contain anything interesting...
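"Failed to create trustee or trust" generally means Magnum cannot create its trustee user in Keystone, and with three controllers the trustee domain setup has to exist and be identical on all of them. The relevant magnum.conf options, as a hedged sketch (domain and user names are placeholders; check what your deployment actually created in Keystone):

```ini
# magnum.conf (illustrative fragment) - the trustee domain and its
# admin user must exist in Keystone, and these values must match on
# all three controllers.
[trust]
cluster_user_trust = True
trustee_domain_name = magnum
trustee_domain_admin_name = magnum_domain_admin
trustee_domain_admin_password = <password>
```

Comparing the rendered magnum.conf across the three controllers is worth doing first, since a config drift between nodes would explain why the 1-controller topology worked.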
Hey, does anyone know of a free and open-source billing software for OpenStack hosting?
Hello Reddit Community!
For several days, I have been attempting to install my first OpenStack environment for testing purposes. However, I always get stuck at:
"Timed out while waiting for services to come online (20/23)"
Error: Timed out while waiting for model 'openstack' to be ready.
I would appreciate any help. Where can I find logs? What keywords should I use to search on Google?
Thank you for your assistance!
I'm working with a public cloud that's actually OpenStack under the hood.
The flavors I can choose from when creating an instance all have a corresponding "disk" size (e.g. 200GB or 500GB). I can choose an OS image to preload onto the instance, and I end up with a VM with a single disk of the specified size with the OS installed on it.
However, this root disk does not show up under "Volumes", and it has a very different performance profile compared to regular block volumes. The block volumes are specced for 3000 IOPS and 120 MB/s, while the boot disks are capable of multi-GB/s throughput and capped at 60k IOPS.
The problem is that I do not know how to control or manage this "root disk", apart from creating an instance from a predefined OS image. If I try to create an instance from an uploaded ISO, what I get is a machine with no writable disks and a 500GB /dev/sr0, which is not exactly what I want.
What exactly is this "root disk" and how do I manage it? How do I create an instance with, say, an empty root disk and a separately attached bootable ISO image? Or even an empty root disk and a bootable volume (such that I could install the OS onto the slow volume and use the extremely fast root disk for the actual application I'm trying to run)?
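That fast per-flavor disk sounds like Nova's local ephemeral root disk, carved out of the hypervisor's local storage, as opposed to network-attached Cinder volumes. It exists only as long as the instance does and isn't managed through the volume API, which is why it never appears under "Volumes". The standard way to opt out of it is boot-from-volume; a hedged CLI sketch (flavor, image, and network names are made up):

```shell
# Boot from a Cinder volume instead of the flavor's local root disk:
# creates a 100GB bootable volume from the image and uses it as root.
openstack server create --flavor m1.large \
    --boot-from-volume 100 --image ubuntu-22.04 \
    --network mynet myserver
```

Whether you can get an *empty* local root disk plus an attached install ISO depends on the provider exposing suitable images/flavors; many public clouds simply don't allow it.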
Hi all,
I've been trying to set up a physical OpenStack cluster using 6 servers with the following roles:
[control]
# These hostnames must be resolvable from your deployment host
OS-POC-MGMT-01
OS-POC-MGMT-02
OS-POC-MGMT-03
# The above can also be specified as follows:
#control[01:03] ansible_user=kolla
# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
OS-POC-MGMT-01
OS-POC-MGMT-02
OS-POC-MGMT-03
[compute]
OS-POC-COMPUTE-01
OS-POC-COMPUTE-02
OS-POC-COMPUTE-03
[monitoring]
OS-POC-MGMT-01
OS-POC-MGMT-02
OS-POC-MGMT-03
# When compute nodes and control nodes use different interfaces,
# you need to comment out "api_interface" and other interfaces from the globals.yml
# and specify like below:
#compute01 neutron_external_interface=eth0 api_interface=em1 tunnel_interface=em1
[storage]
OS-POC-COMPUTE-01
OS-POC-COMPUTE-02
OS-POC-COMPUTE-03
[deployment]
OS-POC-MGMT-01
And I am using the following multinode configuration:
---
workaround_ansible_issue_8743: yes
kolla_base_distro: "debian"
kolla_internal_vip_address: "10.1.0.10"
kolla_internal_fqdn: "vip.os-poc-internal"
kolla_external_vip_address: "172.19.120.200"
kolla_external_fqdn: "openstack-poc.<REDACTED>"
kolla_external_vip_interface: "os_external"
api_interface: "os_api"
tunnel_interface: "os_tunnel"
neutron_external_interface: "internet,office"
neutron_bridge_name: "br-ex1,br-ex2"
neutron_plugin_agent: "ovn"
kolla_enable_tls_internal: "yes"
kolla_enable_tls_external: "yes"
kolla_copy_ca_into_containers: "yes"
openstack_cacert: "/etc/ssl/certs/ca-certificates.crt"
kolla_enable_tls_backend: "yes"
openstack_region_name: "<REDACTED>"
enable_openstack_core: "yes"
enable_cinder: "yes"
enable_magnum: "yes"
enable_zun: "yes"
ceph_glance_user: "os_poc_glance"
ceph_glance_keyring: "client.{{ ceph_glance_user }}.keyring"
ceph_glance_pool_name: "os_poc_images"
ceph_cinder_user: "os_poc_cinder"
ceph_cinder_keyring: "client.{{ ceph_cinder_user }}.keyring"
ceph_cinder_pool_name: "os_poc_volumes"
ceph_cinder_backup_user: "os_poc_cinder-backup"
ceph_cinder_backup_keyring: "client.{{ ceph_cinder_backup_user }}.keyring"
ceph_cinder_backup_pool_name: "os_poc_backups"
ceph_nova_user: "os_poc_nova"
ceph_nova_keyring: "client.{{ ceph_nova_user }}.keyring"
ceph_nova_pool_name: "os_poc_vms"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
nova_compute_virt_type: "kvm"
neutron_ovn_distributed_fip: "yes"
All nodes have 4 interfaces, assigned to 2 LACP bonds called bond0 and os_neutron_ex.
Both bonds are trunks and carry VLAN interfaces.
I've created the 2 networks in OpenStack, but no matter what I do I am unable to connect to the VMs from those networks. Can anyone help me out with how to debug this?
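With two external bridges and OVN, one of the first things worth verifying is that the bridge mappings on each network node actually tie your physical networks to br-ex1/br-ex2. A hedged sketch (the container name follows kolla's defaults; adjust for your deployment):

```shell
# On a network node, check the bridge mappings OVN is using:
docker exec openvswitch_vswitchd ovs-vsctl get Open_vSwitch . \
    external_ids:ovn-bridge-mappings
# Expect something like "physnet1:br-ex1,physnet2:br-ex2". The physnet
# names must match what you passed as --provider-physical-network when
# creating the two provider networks, and internet/office must really
# be enslaved to br-ex1/br-ex2 (ovs-vsctl list-ports br-ex1).
```

If the mappings look right, the next suspects are the trunk/VLAN config on os_neutron_ex and the security group rules on the VMs.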
Hello there! I am currently a software engineer (BE) working for a startup in Dubai, UAE, and I'm thinking about starting my journey with OpenStack. But first I want to explain why I'm considering this decision; tell me if these are valid reasons.
1- Prior to being an SDE I was in technical support and application management. I had to work with servers, networks, databases and everything a support engineer deals with (old-school stuff), and I really liked the experience. Therefore, starting to learn OpenStack should not be an issue, since I have the required prior experience (combined with SDE experience).
2- SDE work and infrastructure are closely interconnected, and strong knowledge of managing infrastructure would be a great skill to acquire. So learning OpenStack would increase my employment chances (a developer who is also a cloud engineer), but I'm not sure about this point?
3- Web service development no longer really excites me, and I would like to take my coding skills into the cloud (infrastructure as code, scripting). Plus, I really miss living inside servers and data centers setting things up.
4- I have a feeling (just a feeling) that there will come a time when some companies dump the public cloud and begin to build their own. It would be nice to position myself for that moment (honestly, where do you think things are going?).
5- I like OpenStack, I like what it stands for, and I love open source.
6- I'm putting together a plan (teaching myself) to start contributing to the project itself.
Your thoughts are welcome; I appreciate the advice.
I want to install OpenStack and I'm hesitating between Zed, 2023.1 (Antelope), and 2023.2 (Bobcat).
So my question is the following: which version is more stable and production-ready?
Has anyone managed to make cloud-init work properly with FreeBSD? I'm having a hard time passing the hostname and IP address to the VM.
When running sudo apt install mysql-server, I get: The following packages have unmet dependencies:
mysql-server : Depends: mysql-cluster-community-server (= 8.0.36-1ubuntu20.04) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
I generated a keypair in OpenStack and fired up an Ubuntu VM with it. The metadata service and networking are fine, and the VM gets the correct name, etc. I can see in the log that cloud-init references my public key, but when I try to SSH using my private key, I get the messages 'Server refused our key' and 'No supported authentication methods available (server sent: publickey)'. What gives? Anyone know?
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2 "No such file or directory")
When the command sudo service mysql restart is given: Job for mysql.service failed because the control process exited with error code. See "systemctl status mysql.service" and "journalctl -xe" for details.
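The unmet dependency on mysql-cluster-community-server suggests the Oracle MySQL APT repository is configured with the cluster flavor selected, so apt tries to pull NDB Cluster packages instead of plain mysql-server. A hedged sketch of checking and fixing that:

```shell
# See which repository the broken candidate comes from:
apt policy mysql-server
# If it points at repo.mysql.com with "mysql-cluster" selected, either
# reconfigure the repo package back to plain "mysql-8.0"...
sudo dpkg-reconfigure mysql-apt-config
sudo apt update
# ...or drop the repo and use the distro's own packages (note that many
# OpenStack install guides actually expect mariadb-server):
# sudo rm /etc/apt/sources.list.d/mysql.list && sudo apt update
```

The socket error and failed restart follow from the broken install: until the server package installs cleanly, mysqld never starts and never creates /var/run/mysqld/mysqld.sock.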
For as long as I've used OpenStack, I've had to create two keypairs for cloud-init purposes: one that could be used to encrypt Windows VM admin passwords, and another that could be injected into Linux VMs for passwordless/ssh auth. That is because I've never found a way to create a keypair that works in both cases.
The SSH keypairs that you can generate through Horizon work for encrypting/decrypting Windows passwords, but they don't seem to work for passwordless auth on an Ubuntu cloud image, etc. Similarly, if I create an ed25519 keypair in PuTTYgen and import it into OpenStack, it works fine for passwordless auth on Ubuntu, but I can't encrypt/decrypt Windows passwords with it.
Anyone know how to generate a keypair that would work universally? Thanks!
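In my experience the Windows admin-password path (nova get-password) can only decrypt with an RSA key, while any OpenSSH-format key works for Linux cloud-init, so a single RSA keypair generated with ssh-keygen and then imported has covered both cases. A sketch (key path and name are placeholders):

```shell
# Generate one RSA keypair usable both for Windows password decryption
# (which requires RSA) and for passwordless ssh on Linux guests.
set -e
dir="$(mktemp -d)"
ssh-keygen -t rsa -b 4096 -N '' -C universal-key -f "${dir}/universal" -q
ls -l "${dir}/universal" "${dir}/universal.pub"
# Import the public half (command from python-openstackclient):
#   openstack keypair create --public-key "${dir}/universal.pub" universal
```

The PuTTYgen ed25519 key failing for Windows passwords is consistent with this: the password encryption is RSA-only, not a keypair-import problem.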
Hi, if I pass my requests through a load balancer (Octavia), the response, which is a server-sent event, is slow, as if there were some kind of buffering in the load balancer before sending a data chunk. If I take an IP address from a server behind the load balancer and make the same request, the problem doesn't exist. The total response time is the same, but the data chunks arrive in waves rather than continuously if the request is made via a load balancer.
Is the problem linked to buffering or something else? Are there any parameters I can set to solve the problem?
If anyone can solve this problem, thank you in advance!
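The "waves" pattern is consistent with HAProxy inside the amphora buffering HTTP responses. One approach worth trying (an assumption on my part, not a documented SSE fix) is a TCP listener, which passes the stream through without HTTP-level buffering, at the cost of L7 features:

```shell
# Recreate the listener/pool as TCP so server-sent events stream
# through the amphora unbuffered (names and port are made up):
openstack loadbalancer listener create --name sse-listener \
    --protocol TCP --protocol-port 443 mylb
openstack loadbalancer pool create --name sse-pool \
    --protocol TCP --listener sse-listener --lb-algorithm ROUND_ROBIN
```

With TCP, TLS termination moves to your backends, so this only fits if they can terminate TLS themselves.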
I'm encountering a problem with my Kolla Ansible installation where two IP addresses are being assigned to the same port or interface, specifically with the kolla_internal_vip_address. This is causing connectivity issues and preventing me from accessing the server. Has anyone else experienced a similar issue, and if so, how did you resolve it? Any advice or suggestions would be greatly appreciated. Thank you!
So I have been uninstalling and reinstalling OpenStack for 2 days now on my 2 HP ProLiant G7s. First I was stalling on the bootstrap; the internet said get an SSD, so I got 16 of them... excessive, but I am really trying to get a solid homelab set up, and nothing is better than faster drives. I managed to get past the bootstrap on both machines, but now I cannot get the dashboard up and running. I followed just about every install tutorial without any luck. I have tried the URL that OpenStack gives me; I've tried the hostname, HTTP, and HTTPS. I'm at a loss and ready to quit.