/r/EnvoyProxy
Envoy Proxy is an open source edge and service proxy, designed for cloud-native applications.
This is a user run unofficial subreddit, we do not represent the Envoy brand and are not affiliated with or endorsed by any company.
istio envoy filter oauth2 works at SIDECAR_INBOUND context but not GATEWAY
I am trying to utilize the oauth2 Envoy filter, initially referencing this example. This works, but when I switch the context to GATEWAY and change the workload selector, I get passthrough.
I have a new session so nothing is stored; I have debugging enabled and am not seeing any errors on the gateway or istiod. We have the response header modification as one of the patches and can see the change happening with this config, so we know the filter is being evaluated.
I've found multiple posts of people doing something similar, and I want to keep this at the gateway level: following the SDS config example with the context kept at SIDECAR_INBOUND, every Envoy proxy pod would need to mount the secret, and we'd need to put the secret in every namespace.
Another thing I could possibly do is look into standing up an SDS server and having the proxies fetch the token secret from it.
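For what it's worth, the gateway-scoped variant of the patch would presumably look something like the sketch below. This is untested; the ingress gateway labels, the router sub-filter anchor, and the elided oauth2 config are assumptions carried over from the sidecar example.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: oauth2-gateway
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway   # assumed gateway labels
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: GATEWAY        # instead of SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.filters.http.oauth2
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.oauth2.v3.OAuth2
          # same oauth2 config (token_endpoint, credentials, etc.) as in the
          # sidecar example -- elided here
```

If this also passes traffic through, comparing the rendered gateway config (`istioctl proxy-config listener <gateway-pod> -o json`) against the working sidecar's can show whether the patch matched at all.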
Hi all, I have a discrepancy between AWS DTO (data transferred over the network) and the request size I report to the database. I added a check that the Envoy sidecar actually goes out to the upstream server by checking the x-envoy-upstream-service-time header, but the header seems to be present on only 25% of requests. Am I missing something?
I'm using Envoy Gateway. My upstream IdP requires a User-Agent header for their OAuth API, including the JWKS URL.
I would like Envoy to add the User-Agent header to this request, or even to all requests. This seems to be something I can do using cluster upstream filters, but I'm not sure how to make that happen.
Any advice would be appreciated.
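My reading of the upstream HTTP filters docs is that a `header_mutation` filter can be attached per cluster; the sketch below is untested, and the cluster name and User-Agent value are placeholders (the upstream chain must end with the upstream codec filter):

```yaml
clusters:
- name: idp   # placeholder cluster name
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      explicit_http_config:
        http_protocol_options: {}
      http_filters:
      - name: envoy.filters.http.header_mutation
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.header_mutation.v3.HeaderMutation
          mutations:
            request_mutations:
            - append:
                header:
                  key: user-agent
                  value: my-proxy/1.0   # placeholder value
                append_action: OVERWRITE_IF_EXISTS_OR_ADD
      - name: envoy.filters.http.upstream_codec
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.http.upstream_codec.v3.UpstreamCodec
```

With Envoy Gateway specifically, injecting something like this into the generated cluster would presumably go through an EnvoyPatchPolicy.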
I am trying to build Envoy on my VM with FIPS-compliant BoringSSL, but the build fails, and I am unable to understand why even with the verbose build option.
This is what I did:
Failure:
```
Executing genrule @boringssl_fips//:build failed
```
With debugging, it gave me the command it was trying to run:
```
xxx/.cache/bazel/_bazel_opc/install/20da5ab742b8d3d499c34fdafcd3c8b8/linux-sandbox -t 15 -w xxx/.cache/bazel/_bazel_opc/cdf3d754b8095fbcb6565a460418c1ae/sandbox/linux-sandbox/2233/execroot/envoy -w /tmp -w /dev/shm -S xxx/.cache/bazel/_bazel_opc/cdf3d754b8095fbcb6565a460418c1ae/sandbox/linux-sandbox/2233/stats.out -D -- /bin/bash -c 'source external/bazel_tools/tools/genrule/genrule-setup.sh; bazel/external/boringssl_fips.genrule_cmd bazel-out/k8-opt/bin/external/boringssl_fips/crypto/libcrypto.a bazel-out/k8-opt/bin/external/boringssl_fips/ssl/libssl.a'
```
Any tip?
Can the connection in Envoy be delayed to switch between protocols per service, such as QUIC for video services and TCP for other services?
Is anyone using HTTP/3 (QUIC) at scale?
"Configuring auto_config with http3_protocol_options will result in Envoy attempting to use HTTP/3 for endpoints which have explicitly advertised HTTP/3 support via an alt-svc header. When using auto_config with http3_protocol_options, Envoy will attempt to create a QUIC connection, then if the QUIC handshake is not complete after a short delay, will kick off a TCP connection, and will use whichever is established first. Downstream Envoy HTTP/3 support can be turned up via adding quic_options, ensuring the downstream transport socket is a QuicDownstreamTransport, and setting the codec to HTTP/3"
-- Scott Beeker
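The `auto_config` behavior quoted above corresponds roughly to cluster protocol options along these lines (a sketch; the cluster and cache names are made up):

```yaml
clusters:
- name: backend   # placeholder
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      auto_config:
        http2_protocol_options: {}
        http3_protocol_options: {}
        # cache of alt-svc advertisements, used to decide when to try QUIC
        alternate_protocols_cache_options:
          name: default_alternate_protocols_cache
```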
Hi everyone,
I am new to Envoy Proxy. Can you suggest the best resources to learn? Online courses, docs, etc.
Hello!
We are currently evaluating Envoy for use as a proxy to route all our internet traffic through HTTPS. However, we are encountering some problems when we start transmitting data.
```
[root@ubuntu]# curl -v -x 10.10.10.10:8081 https://google.com
* Trying 10.10.10.10:8081...
* Connected to 10.10.10.10 (10.10.10.10) port 8081 (#0)
* allocate connect buffer
* Establish HTTP proxy tunnel to google.com:443
> CONNECT google.com:443 HTTP/1.1
> Host: google.com:443
> User-Agent: curl/8.0.1
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 200 OK
< date: Tue, 19 Dec 2023 12:35:06 GMT
< server: envoy
<
* CONNECT phase completed
* CONNECT tunnel established, response 200
* ALPN: offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
* CApath: none
* OpenSSL/3.0.8: error:0A00010B:SSL routines::wrong version number
* Closing connection 0
curl: (35) OpenSSL/3.0.8: error:0A00010B:SSL routines::wrong version number
```
In the Envoy logs I can hardly see any errors; the relevant trace output is below:
```
[2023-12-19 12:36:56.961][38170][trace][connection] [source/common/network/connection_impl.cc:423] [C2] raising connection event 2
[2023-12-19 12:36:56.961][38170][trace][connection] [source/common/network/connection_impl.cc:568] [C2] socket event: 3
[2023-12-19 12:36:56.961][38170][trace][connection] [source/common/network/connection_impl.cc:679] [C2] write ready
[2023-12-19 12:36:56.961][38170][trace][connection] [source/common/network/connection_impl.cc:608] [C2] read ready. dispatch_buffered_data=0
[2023-12-19 12:36:56.961][38170][trace][connection] [source/common/network/raw_buffer_socket.cc:24] [C2] read returns: 111
[2023-12-19 12:36:56.961][38170][trace][connection] [source/common/network/raw_buffer_socket.cc:38] [C2] read error: Resource temporarily unavailable
[2023-12-19 12:36:56.961][38170][debug][connection] [./source/common/network/connection_impl.h:98] [C2] current connecting state: false
[2023-12-19 12:36:56.961][38170][debug][connection] [source/common/network/connection_impl.cc:941] [C3] connecting to 142.250.187.238:443
[2023-12-19 12:36:56.962][38170][debug][connection] [source/common/network/connection_impl.cc:960] [C3] connection in progress
[2023-12-19 12:36:56.964][38170][trace][connection] [source/common/network/connection_impl.cc:568] [C3] socket event: 2
[2023-12-19 12:36:56.964][38170][trace][connection] [source/common/network/connection_impl.cc:679] [C3] write ready
[2023-12-19 12:36:56.964][38170][debug][connection] [source/common/network/connection_impl.cc:688] [C3] connected
[2023-12-19 12:36:56.964][38170][trace][connection] [source/extensions/transport_sockets/tls/ssl_handshaker.cc:93] [C3] ssl error occurred while read: WANT_READ
[2023-12-19 12:36:56.972][38170][trace][connection] [source/common/network/connection_impl.cc:568] [C3] socket event: 3
[2023-12-19 12:36:56.972][38170][trace][connection] [source/common/network/connection_impl.cc:679] [C3] write ready
[2023-12-19 12:36:56.972][38170][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:360] [C3] Async cert validation completed
[2023-12-19 12:36:56.972][38170][trace][connection] [source/common/network/connection_impl.cc:423] [C3] raising connection event 2
[2023-12-19 12:36:56.972][38170][trace][connection] [source/common/network/connection_impl.cc:362] [C3] readDisable: disable=true disable_count=0 state=0 buffer_length=0
[2023-12-19 12:36:56.972][38170][trace][connection] [source/common/network/connection_impl.cc:362] [C3] readDisable: disable=false disable_count=1 state=0 buffer_length=0
[2023-12-19 12:36:56.973][38170][trace][connection] [source/common/network/connection_impl.cc:483] [C2] writing 71 bytes, end_stream false
[2023-12-19 12:36:56.973][38170][trace][connection] [source/common/network/connection_impl.cc:608] [C3] read ready. dispatch_buffered_data=0
[2023-12-19 12:36:56.973][38170][trace][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:87] [C3] ssl read returns: -1
[2023-12-19 12:36:56.973][38170][trace][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:127] [C3] ssl error occurred while read: WANT_READ
[2023-12-19 12:36:56.973][38170][trace][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:163] [C3] ssl read 0 bytes
[2023-12-19 12:36:56.973][38170][trace][connection] [source/common/network/connection_impl.cc:568] [C3] socket event: 2
[2023-12-19 12:36:56.973][38170][trace][connection] [source/common/network/connection_impl.cc:679] [C3] write ready
[2023-12-19 12:36:56.973][38170][trace][connection] [source/common/network/connection_impl.cc:568] [C2] socket event: 2
[2023-12-19 12:36:56.973][38170][trace][connection] [source/common/network/connection_impl.cc:679] [C2] write ready
[2023-12-19 12:36:56.973][38170][trace][connection] [source/common/network/raw_buffer_socket.cc:67] [C2] write returns: 71
[2023-12-19 12:36:56.975][38170][trace][connection] [source/common/network/connection_impl.cc:568] [C2] socket event: 3
[2023-12-19 12:36:56.975][38170][trace][connection] [source/common/network/connection_impl.cc:679] [C2] write ready
[2023-12-19 12:36:56.975][38170][trace][connection] [source/common/network/connection_impl.cc:608] [C2] read ready. dispatch_buffered_data=0
[2023-12-19 12:36:56.975][38170][trace][connection] [source/common/network/raw_buffer_socket.cc:24] [C2] read returns: 517
[2023-12-19 12:36:56.975][38170][trace][connection] [source/common/network/raw_buffer_socket.cc:38] [C2] read error: Resource temporarily unavailable
[2023-12-19 12:36:56.975][38170][trace][connection] [source/common/network/connection_impl.cc:483] [C3] writing 517 bytes, end_stream false
[2023-12-19 12:36:56.975][38170][trace][connection] [source/common/network/connection_impl.cc:568] [C3] socket event: 2
[2023-12-19 12:36:56.975][38170][trace][connection] [source/common/network/connection_impl.cc:679] [C3] write ready
[2023-12-19 12:36:56.976][38170][trace][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:269] [C3] ssl write returns: 517
[2023-12-19 12:36:57.077][38170][trace][connection] [source/common/network/connection_impl.cc:568] [C3] socket event: 3
[2023-12-19 12:36:57.077][38170][trace][connection] [source/common/network/connection_impl.cc:679] [C3] write ready
[2023-12-19 12:36:57.077][38170][trace][connection] [source/common/network/connection_impl.cc:608] [C3] read ready. dispatch_buffered_data=0
[2023-12-19 12:36:57.077][38170][trace][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:87] [C3] ssl read returns: 179
[2023-12-19 12:36:57.077][38170][trace][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:87] [C3] ssl read returns: 0
[2023-12-19 12:36:57.077][38170][trace][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:127] [C3] ssl error occurred while read: SYSCALL
[2023-12-19 12:36:57.077][38170][trace][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:163] [C3] ssl read 179 bytes
[2023-12-19 12:36:57.078][38170][debug][connection] [source/common/network/connection_impl.cc:139] [C3] closing data_to_write=0 type=1
[2023-12-19 12:36:57.078][38170][debug][connection] [source/common/network/connection_impl.cc:250] [C3] closing socket: 1
[2023-12-19 12:36:57.078][38170][debug][connection] [source/extensions/transport_sockets/tls/ssl_socket.cc:321] [C3] SSL shutdown: rc=0
[2023-12-19 12:36:57.078][38170][trace][connection] [source/common/network/connection_impl.cc:423] [C3] raising connection event 1
[2023-12-19 12:36:57.078][38170][trace][connection] [source/common/network/connection_impl.cc:483] [C2] writing 179 bytes, end_stream false
[2023-12-19 12:36:57.078][38170][debug][connection] [source/common/network/connection_impl.cc:139] [C2] closing data_to_write=179 type=2
[2023-12-19 12:36:57.078][38170][debug][connection] [source/common/network/connection_impl_base.cc:47] [C2] setting delayed close timer with timeout 1000 ms
[2023-12-19 12:36:57.078][38170][trace][connection] [source/common/network/connection_impl.cc:568] [C2] socket event: 2
[2023-12-19 12:36:57.078][38170][trace][connection] [source/common/network/connection_impl.cc:679] [C2] write ready
[2023-12-19 12:36:57.078][38170][trace][connection] [source/common/network/raw_buffer_socket.cc:67] [C2] write returns: 179
[2023-12-19 12:36:57.078][38170][debug][connection] [source/common/network/connection_impl.cc:720] [C2] write flush complete
[2023-12-19 12:36:57.078][38170][trace][connection] [source/common/network/connection_impl.cc:568] [C2] socket event: 2
[2023-12-19 12:36:57.078][38170][trace][connection] [source/common/network/connection_impl.cc:679] [C2] write ready
[2023-12-19 12:36:57.078][38170][debug][connection] [source/common/network/connection_impl.cc:720] [C2] write flush complete
[2023-12-19 12:36:58.078][38170][debug][connection] [source/common/network/connection_impl_base.cc:69] [C2] triggered delayed close
[2023-12-19 12:36:58.078][38170][debug][connection] [source/common/network/connection_impl.cc:250] [C2] closing socket: 1
[2023-12-19 12:36:58.078][38170][trace][connection] [source/common/network/connection_impl.cc:423] [C2] raising connection event 1
```
I hope that you will be able to shed some light on this matter.
Thank you!
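For comparison, a minimal CONNECT-terminating listener that tunnels the payload unmodified (so the client's TLS bytes pass through untouched) looks roughly like the sketch below; the cluster is elided and the names are placeholders, so this is not a drop-in config:

```yaml
listeners:
- name: forward_proxy   # placeholder
  address:
    socket_address: { address: 0.0.0.0, port_value: 8081 }
  filter_chains:
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: connect_proxy
        route_config:
          virtual_hosts:
          - name: proxy
            domains: ["*"]
            routes:
            - match:
                connect_matcher: {}
              route:
                cluster: forward_cluster   # placeholder cluster
                upgrade_configs:
                - upgrade_type: CONNECT
                  # tunnel raw bytes instead of treating them as HTTP
                  connect_config: {}
        http_filters:
        - name: envoy.filters.http.router
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
```

An OpenSSL "wrong version number" right after a successful CONNECT usually means the bytes inside the tunnel were not the expected TLS ServerHello, which is worth checking against whatever transport socket or filter sits on the upstream path.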
Hi,
My setup looks like this:
LB -> nginx (WAF) -> Envoy -> ServicesA/B/C
We have autoscaling enabled for Service A from 2 to 10 pods. I can see on graphs that, over time, the load on the 2 pods that stay running after scale-down gets a lot higher in comparison to newly created pods.
All traffic in the cluster also goes through Envoy via a ClusterIP service (kube-proxy iptables mode).
After a deployment (or rollout restart) the traffic goes back to an even spread, but after a few days it's back to those 2 pods doing 20% more work than any other.
I thought Envoy load balancing was not subject to issues like long-running connections or iptables' semi-round-robin behavior. We don't have topology hints enabled for that environment.
Do you have any guesses as to what might be the cause?
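If long-lived keep-alive connections do turn out to be the culprit, one knob sometimes used is capping upstream connection lifetime so Envoy periodically re-establishes connections and re-balances; a sketch (cluster name and duration are placeholders, untested):

```yaml
clusters:
- name: service_a   # placeholder
  lb_policy: LEAST_REQUEST
  typed_extension_protocol_options:
    envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
      "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
      explicit_http_config:
        http_protocol_options: {}
      common_http_protocol_options:
        # recycle upstream connections every 5 minutes
        max_connection_duration: 300s
```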
Hey all,
I am currently trying to apply a filter that enforces `max_stream_duration` and `max_connection_duration` timeouts. The pods these need to apply to are sitting underneath a Kubernetes service. They are set to deny all traffic not coming from this service. I *believe* I have the filter configured correctly, but have been unable to get the desired responses.
For all tests below, I set the timeout to 1s and then ensured that the total time the request took was longer. All three tests described below return a 200 success, even if they take 20+ seconds.
I have tried to use an EnvoyFilter with `http.fault` to introduce some lag. This does not cause my filter to trigger.
I have tried to use a VirtualService to cause a delay. This also does not work.
I have tried to create a file that takes more than 1s to transfer, and then transfer the file. This doesn't work either.
What is the recommended way to do this?
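For context, the shape of patch I'd expect for these timeouts merges them into the HTTP connection manager's `common_http_protocol_options`; a sketch, untested, with everything outside the timeouts assumed:

```yaml
configPatches:
- applyTo: NETWORK_FILTER
  match:
    context: SIDECAR_INBOUND
    listener:
      filterChain:
        filter:
          name: envoy.filters.network.http_connection_manager
  patch:
    operation: MERGE
    value:
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        common_http_protocol_options:
          max_stream_duration: 1s
          max_connection_duration: 1s
```

One caveat worth noting: `max_connection_duration` drains the connection rather than failing in-flight requests, which could explain 200s on tests aimed at that particular timeout.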
From the docs: https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/listener/v3/listener.proto
The limit is described as a `soft limit`. Does this mean that the total buffer size that can be used is 1MiB (assuming no override), and that if you receive something larger than 1MiB it will just... succeed? What "limiting" is happening? What does a "soft limit" mean in terms of buffer space?
If you want to enforce a hard limit, what is the recommended way to do that?
Is it possible by any means to revoke JWTs in Envoy? In my personal opinion JWTs should be short-lived and not revoked by an additional system, since that increases complexity a lot.
Anyway, I have the task of evaluating such a concept. To avoid creating a dependency on another service, I thought of using RabbitMQ to provide a queue with information about JWTs that should no longer be accepted.
Is it somehow possible to let Envoy subscribe to this queue and cache these to-be-revoked tokens? If the subscription itself is not possible: can I make Envoy reject certain JWTs with something like filters?
Thanks in advance <3
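There is no built-in queue subscriber, but one pattern that might work is a small Lua filter holding a denylist that an out-of-band consumer (reading the RabbitMQ queue) refreshes by pushing updated config. A sketch, untested, matching on the raw Authorization header rather than a parsed `jti` for brevity:

```yaml
http_filters:
- name: envoy.filters.http.lua
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.filters.http.lua.v3.Lua
    default_source_code:
      inline_string: |
        -- Placeholder denylist; a real setup would regenerate this config
        -- (e.g. via xDS) from the RabbitMQ consumer.
        local revoked = { ["Bearer <revoked-token>"] = true }
        function envoy_on_request(handle)
          local auth = handle:headers():get("authorization")
          if auth ~= nil and revoked[auth] then
            handle:respond({[":status"] = "401"}, "token revoked")
          end
        end
```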
I want to 301-redirect to multiple endpoints. It's possible in nginx, but no health checking is available there. I tried others; Caddy and HAProxy don't even provide multi-endpoint redirects. Since Envoy is extensively configurable (though I don't have much experience with it): is it possible in Envoy to redirect to multiple endpoints with health checking?
```
# nginx.conf
split_clients "${remote_addr}" $destination {
    40% server1:port1;
    30% server2:port2;
    20% server3:port3;
    10% server4:port4;
}

server {
    listen myport;
    location / {
        return 302 http://$destination$request_uri;
    }
}
```
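Envoy's route redirect action does not obviously support weighted targets; what is well supported is a weighted-clusters split with active health checks, which proxies rather than redirects. If proxying is acceptable, a sketch would be (names and ports are placeholders, untested):

```yaml
route_config:
  virtual_hosts:
  - name: split
    domains: ["*"]
    routes:
    - match: { prefix: "/" }
      route:
        weighted_clusters:
          clusters:
          - { name: server1, weight: 40 }
          - { name: server2, weight: 30 }
          - { name: server3, weight: 20 }
          - { name: server4, weight: 10 }
clusters:
- name: server1   # repeat for server2..server4
  type: STRICT_DNS
  load_assignment:
    cluster_name: server1
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address: { address: server1, port_value: 80 }
  health_checks:
  - timeout: 2s
    interval: 5s
    unhealthy_threshold: 2
    healthy_threshold: 2
    tcp_health_check: {}
```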
I have to build Envoy TCP Proxy as load balancer to forward TCP packets (logs) from some systems to Splunk server.
I configured TCP proxy in envoy.yaml as below:
```
static_resources:
  listeners:
  - name: listener_528tcp
    reuse_port: true
    address:
      socket_address:
        protocol: TCP
        address: 0.0.0.0
        port_value: 528
    listener_filters:
    - name: envoy.filters.listener.proxy_protocol
      typed_config:
        '@type': type.googleapis.com/envoy.extensions.filters.listener.proxy_protocol.v3.ProxyProtocol
    - name: envoy.filters.listener.original_src
      typed_config:
        '@type': type.googleapis.com/envoy.extensions.filters.listener.original_src.v3.OriginalSrc
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          '@type': type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: ingress_tcp528
          cluster: 528_tcp
          idle_timeout: 10s
          per_connection_buffer_limit_bytes: 32768
```
I used envoy v1.21.1 to validate the configuration file and the result is OK, but when I start the Envoy process and push TCP packets to port 528 of the proxy, it does not forward them to the endpoints. I checked the endpoints with `tcpdump -i ens224 tcp port 528 -vv` and don't see any TCP packets forwarded from the Envoy proxy.
If I delete the `listener_filters` block, restart Envoy, and push TCP packets to port 528 again, I can see packets arriving at the endpoints, but the log body contains the IP address of the Envoy proxy (not the remote/client IP address).
I think my `listener_filters` block has some configuration issue, but I cannot find the reason.
Please help me solve this case. Thanks very much!
Hello there!
I'm very new to Envoy, and my question is: what is the proper way (if one exists at all) to extend Envoy's functionality with a custom filter? I've tried to follow pull requests that were opened to add new filters, and I've also looked at the structure of existing filters. But after compiling and configuring successfully, Envoy doesn't load my filter and there's an error. It isn't shown in the startup log with the other HTTP filters either. I can't figure out what I'm missing.
Is there some detailed article/example/guide or another "how to" that could probably help?
Has anyone ever used the STARTTLS extension with SMTP? I have a setup where Envoy handles TLS termination in front of Postfix. Now I would like to also support STARTTLS, but I could not get it to work. I'm wondering if the extension is even supposed to work with SMTP, because it would need to modify the server response to EHLO.
EDIT:
Answer is here https://github.com/envoyproxy/envoy/issues/19765#issuecomment-1031826343
Is it possible to queue messages in Envoy while a backend is going through an update? Does Envoy, or any other proxy, support request queuing and replay?
Hi, I'm implementing a service that requires hot updates without downtime. I'm planning to use a proxy like Envoy that supports load balancing. The high-level idea is:
This is my first time designing such an architecture, so I need help figuring out a few things.
Please help!
Hi I’m currently facing a problem I’ve been trying to solve for a few days. I’m a total beginner regarding envoy and Kubernetes (minikube). I’m trying to use envoy as a load balancer in a Kubernetes setup. Afaik deploying a web server as a service in Kubernetes should allow my envoy load balancer to automatically discover my web server pods and distribute load between them. My code is available at https://github.com/UDrache/kube_envoy_test Any help would be appreciated. Envoy config: https://github.com/UDrache/kube_envoy_test/blob/main/envoy/envoy.yaml
I have tried to roughly do as described in this post https://blog.markvincze.com/how-to-use-envoy-as-a-load-balancer-in-kubernetes/
When I curl Envoy I get "no healthy upstream" as a response. I'm not entirely sure what this means, but my guess is that Envoy can't reach my web server.
I’ll update the repo if I get it working for others to learn from.
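For reference, "no healthy upstream" generally means the cluster resolved no (healthy) endpoints. The pattern from that blog post resolves individual pod IPs by pointing a STRICT_DNS cluster at a headless Service (`clusterIP: None`); a sketch, with the service name, namespace, and port as placeholders:

```yaml
clusters:
- name: webapp
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  load_assignment:
    cluster_name: webapp
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              # headless Service, so DNS returns one A record per pod
              address: webapp-headless.default.svc.cluster.local
              port_value: 8080
```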
Hello everyone. I'm trying to set up my Envoy proxy to handle mTLS traffic, but in addition to the standard client certificate check I want to restrict calls to a client certificate AND a CIDR range (an IP whitelist). I have basic mTLS working using a transport_socket as below, and now I'm trying to figure out the best way to handle the IP whitelisting.

It looks like envoy.filters.network.client_ssl_auth would be perfect for that, but the documentation is not very clear on how to set it up, and I'm also not certain that it will play nicely with the transport socket I already have defined. Would this network filter take the place of the client cert auth in the transport socket, so that I would have just the server-side TLS config in transport_socket and the client cert auth in the client_ssl_auth filter?

Lastly, I'm not sure what the auth_api_cluster is meant to be; it doesn't appear to be defined anywhere. Is that just a custom API server I'm meant to build that serves the relevant REST APIs as defined here?
```
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
    require_client_certificate: true
    common_tls_context:
      tls_params:
        tls_minimum_protocol_version: TLSv1_2
        tls_maximum_protocol_version: TLSv1_3
        cipher_suites:
        - ECDHE-ECDSA-AES128-GCM-SHA256
        - ECDHE-RSA-AES128-GCM-SHA256
        - ECDHE-ECDSA-AES128-SHA
        - ECDHE-RSA-AES128-SHA
        - AES128-GCM-SHA256
        - AES128-SHA
        - ECDHE-ECDSA-AES256-GCM-SHA384
        - ECDHE-RSA-AES256-GCM-SHA384
        - ECDHE-ECDSA-AES256-SHA
        - ECDHE-RSA-AES256-SHA
        - AES256-GCM-SHA384
        - AES256-SHA
      validation_context_sds_secret_config:
        name: test_client
      tls_certificate_sds_secret_configs:
      - name: server_cert
```
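One alternative to client_ssl_auth worth noting: the RBAC network filter can match source IP ranges in the same filter chain as the existing mTLS transport socket, with no external auth service. A sketch (the CIDR and policy name are placeholders, and the rest of the filter chain is elided):

```yaml
filter_chains:
- filters:
  - name: envoy.filters.network.rbac
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.filters.network.rbac.v3.RBAC
      stat_prefix: ip_allowlist
      rules:
        action: ALLOW
        policies:
          allow_cidr:   # placeholder policy name
            permissions:
            - any: true
            principals:
            - direct_remote_ip:
                address_prefix: 10.0.0.0   # placeholder CIDR
                prefix_len: 8
  # ... tcp_proxy / http_connection_manager follows as before
```

The client certificate check stays in the DownstreamTlsContext above; the RBAC filter only adds the IP restriction on top of it.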
Let's say I do NOT run Kubernetes for my web app; the backend uses Node Express and a MySQL database. Can I use Envoy as a front proxy to serve internet users, with the Node Express server as the upstream?
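For what it's worth, Envoy works fine as a front proxy without Kubernetes; a minimal sketch for this setup might look like the following, with the listener port and the Node Express address as placeholders:

```yaml
static_resources:
  listeners:
  - name: http_in
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: app
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: node_app }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: node_app
    type: STRICT_DNS
    load_assignment:
      cluster_name: node_app
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              # placeholder: the Node Express server
              socket_address: { address: 127.0.0.1, port_value: 3000 }
```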