httpx 1.7.0 has been released.
```ruby
HTTPX.get("https://gitlab.com/honeyryderchuck/httpx")
```
HTTPX is an HTTP client library for the Ruby programming language.
Among its features, it supports:
* HTTP/2 and HTTP/1.x protocol versions
* Concurrent requests by default
* Simple and chainable API
* Proxy Support (HTTP(S), CONNECT tunnel, Socks4/4a/5)
* Simple Timeout System
* Lightweight by default (require what you need)
And also:
* Compression (gzip, deflate, brotli)
* Streaming Requests
* Authentication (Basic Auth, Digest Auth, AWS Sigv4)
* Expect 100-continue
* Multipart Requests
* Cookies
* HTTP/2 Server Push
* H2C Upgrade
* Automatic follow redirects
* International Domain Names
* GRPC
* Circuit breaker
* WebDAV
* SSRF Filter
* Response caching
* HTTP/2 bidirectional streaming
* QUERY HTTP verb
* Datadog integration
* Faraday integration
* Webmock integration
* Sentry integration
Here are the updates since the last release:
# 1.7.0
## Features
### All AUTH plugin improvements!!
#### `:auth`
The `:auth` plugin can now be used with a dynamic callable object (methods,
procs...) to generate the token.
```ruby
# static token, pre 1.7.0
HTTPX.plugin(:auth).authorization("API-TOKEN")

# dynamically generate token!
HTTPX.plugin(:auth).authorization { generate_new_ephemeral_token }
```
The `.authorization` method is now syntactic sugar for a new option,
`:auth_header_value`, which can be used directly, alongside a
`:auth_header_type`:
```ruby
HTTPX.plugin(:auth).authorization("API-TOKEN")
HTTPX.plugin(:auth).authorization { generate_new_ephemeral_token }
HTTPX.plugin(:auth).authorization("Bearer API-TOKEN")

# same as

HTTPX.plugin(:auth, auth_header_value: "API-TOKEN")
HTTPX.plugin(:auth, auth_header_value: -> { generate_new_ephemeral_token })
HTTPX.plugin(:auth, auth_header_type: "Bearer", auth_header_value: "API-TOKEN")
```
A new option `:generate_auth_value_on_retry` (which can be passed a
callable receiving a response object) is now available; when used alongside
the `:retries` plugin, it'll use the callable passed to the
`.authorization` method to generate a new token before retrying the request:
```ruby
authed = HTTPX.plugin(:retries).plugin(:auth, generate_auth_value_on_retry: ->(res) {
  res.status == 401
}).authorization { generate_new_ephemeral_token }
authed.get("https://example.com")
```
Read more about it in the auth plugin wiki.
#### `:oauth`
The `:oauth` plugin implementation was revamped to make use of the `:auth`
plugin new functionality, in order to make managing an oauth session more
seamless.
Take the following example:
```ruby
session = HTTPX.plugin(:oauth).with_oauth_options(
  issuer: server.origin,
  client_id: "CLIENT_ID",
  client_secret: "SECRET",
)
session.get("https://example.com")
#=> will load server metadata, request an access token,
#   and perform the request with the access token.

# 2 hours later...
session.get("https://example.com")
# it'll reuse the same access token, and if the request fails with 401,
# it'll request a new access token using the refresh token grant
# (when supported by the token issuer), and re-perform the original
# request with the new access token.
```
A new option, `:oauth_options`, is now available. It supports the same parameters previously supported by the `:oauth_session` option.
The following components are therefore deprecated and scheduled for removal
in a future major version:
* `:oauth_session` option
* `.oauth_auth` session method
* `.with_access_token` session method
#### `:bearer_auth`, `:digest_auth`, `:ntlm_auth`
The `:auth` plugin is now the foundation of each of these plugins, which have not undergone major API changes.
Read more about it in the oauth plugin wiki.
### `:retries` plugin: `:retry_after` backoff algorithms
The `:retries` plugin supports two new possible values for the
`:retry_after` option: `:exponential_backoff` and `:polynomial_backoff`.
They implement the respective backoff calculation for each retry of a given
request.
```ruby
# will wait 1, 2, 4, 8, 16 seconds..., depending on how many retries remain
session = HTTPX.plugin(:retries, retry_after: :exponential_backoff)
```
Read more about it in the retries plugin wiki.
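As a rough sketch of what these strategies compute (the exact formulas and any jitter httpx applies aren't reproduced here; the quadratic polynomial below is an assumption for illustration):

```ruby
# Illustrative backoff calculations: exponential doubles the wait on each
# retry (1, 2, 4, 8, 16...), while a polynomial (here quadratic, chosen for
# illustration) grows more slowly (1, 4, 9, 16, 25...).
def exponential_backoff(retry_count)
  2**retry_count
end

def polynomial_backoff(retry_count)
  (retry_count + 1)**2
end

(0..4).map { |n| exponential_backoff(n) } # => [1, 2, 4, 8, 16]
(0..4).map { |n| polynomial_backoff(n) }  # => [1, 4, 9, 16, 25]
```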
### Ractor compatibility
`httpx` can be used within a ractor:
```ruby
# ruby 4.0 syntax
response = Ractor.new(uri) do |uri|
  HTTPX.get(uri)
end.value
```
Bear in mind that, if you're connecting via HTTPS, you'll need to make sure
you're using version 4.0 or higher of the `openssl` gem.
The test suite isn't exhaustive for ractors yet, but most plugins should
also be ractor-compatible. If they don't work, that's a bug, and you're
encouraged to report it.
## Improvements
* When encoding the `:json` param as an `application/json` payload
(example: `HTTPX.post("https://example.com", json: { foo: "bar" })`), when
the `json` standard library is used, `JSON.generate` is now called
(instead of `JSON.dump`) to encode the JSON payload. The reason is that,
unlike `JSON.dump`, it doesn't rely on access to a global mutable hash, and
is therefore ractor-safe.
* `:stream` plugin: the stream response class (the object returned from
request calls when streaming) can now be extended. You can add a
`StreamResponseMethods` module to your plugin. Read more about it in the
documentation.
* The resolver name cache (used by the native and https resolvers) was
remade into an LRU cache, so it will no longer keep growing when
`httpx` is used to connect to a huge number of hostnames in a process.
* the native and https DNS resolvers will ignore answers with a SERVFAIL code
while there are retries left (some resolvers use this error code for rate
limiting).
* `:timeout` option values are now validated, and an error is raised when
passing an unrecognized timeout option (which is a good layer of protection
for typos).
* pool: try passing the scheduler to a thread waiting on a connection, to
avoid the case where a connection is checked in and then immediately
checked out again when doing multiple requests in a loop, never giving
other waiters a chance and potentially causing the pool to time out.
* headers deep-freeze and dup.
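Regarding the `:json` encoding change above: the two stdlib entry points produce the same output for typical payloads, so the switch is invisible to callers; the difference is that `JSON.generate` doesn't rely on the global mutable state `JSON.dump` uses, which matters inside ractors.

```ruby
require "json"

payload = { foo: "bar", count: 2 }

# Both produce the same JSON string for a plain hash...
JSON.generate(payload) # => '{"foo":"bar","count":2}'
JSON.dump(payload)     # => '{"foo":"bar","count":2}'

# ...but JSON.generate avoids the global mutable state JSON.dump relies on,
# making it safe to call from inside a Ractor.
```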
## Bugfixes
* recover and close connection when an `IOError` is raised while waiting
for IO readiness (could cause busy loops during HTTP/2 termination
handshake).
* `:stream_bidi` plugin: improve thread-safety of buffer operations when
the session is used from multiple threads.
* `:stream_bidi` plugin: added missing methods to the signal object in order
to comply with the Selectable API (it was reported as raising
`NoMethodError` under certain conditions).
* `:stream_bidi` plugin: can support non-bidirectional stream requests
using the same session.
* `:stream` plugin: is now compatible with fiber scheduler engines (via the
`:fiber_concurrency` plugin).
* `:stream` plugin: make sure that long-running stream requests do not
share the same connection as regular requests.
* `:digest_auth` plugin: can now support qop values wrapped inside
parentheses in the `www-authenticate` header (i.e. `qop="('auth',)"`).
* https resolver: handle 3XX redirect responses in HTTP DNS queries.
* https resolver: do not close HTTP connections which are shared across
AAAA and A resolution paths while in use by one of them.
* fix access to private method from `http-2` which was made public in more
recent versions, but not in older still-supported versions.
* fixed resolver log message which was using a "connection" label.
* `HTTPX::Response.copy_to` will explicitly close the response at the end;
given that the body file can be moved as a result, there is no guarantee
that the response is still usable, so it might as well be closed
altogether.
* selector: avoid skipping persistent connections that should be deactivated,
caused by modifying the selector collection while iterating over it.
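To illustrate the `:digest_auth` qop fix above: a tolerant parser that accepts both the standard comma-separated form and the parenthesized variant could look like the following (a hypothetical sketch, not httpx's actual implementation):

```ruby
# Hypothetical sketch of tolerant qop parsing: accepts the standard
# `qop="auth, auth-int"` form as well as the non-standard parenthesized
# variant `qop="('auth',)"` seen in some www-authenticate headers.
def parse_qop(raw)
  raw.delete("()'\"").split(",").map(&:strip).reject(&:empty?)
end

parse_qop("auth, auth-int") # => ["auth", "auth-int"]
parse_qop("('auth',)")      # => ["auth"]
```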
## Breaking Changes
### `:digest_auth` error
The main error class for the `:digest_auth` plugin has been moved to a
different location. If you were rescuing the
`HTTPX::Plugins::DigestAuth::DigestError` error, you should now rescue
`HTTPX::Authentication::Digest::Error`.
### `:stream` plugin: `build_request` should receive `stream: true` for stream requests
If you're building request objects before passing them to the session,
you must now create them with the `:stream` option set:
```ruby
session = HTTPX.plugin(:stream)

# before
req = session.build_request("GET", "https://example.com/stream")
session.request(req, stream: true)

# after
req = session.build_request("GET", "https://example.com/stream", stream: true)
session.request(req)
```
Previous code may still work in a few cases, but it is not guaranteed to
work in all cases.
# 1.6.3
## Features
* allow redacting only headers, or only the body, when using `debug_redact:
:headers` or `debug_redact: :body` respectively.
## Improvements
* `system` resolver now works in a non-blocking manner, initiating the DNS
query in a separate thread and then waiting on a pipe (it was
blocking the main thread during resolution before).
* reduce allocations to a single shared options object when headers are
passed as a session-level option, like `HTTPX.with(headers:
headers).get(...)`
* favour using `String#replace` in buffer operations (instead of
"clean-then-append").
* using `Array#unshift` instead of `Array#concat` in order to ensure that
request ordering is respected in the face of an in-between error which
requires reconnect-and-resend.
* replaced more internal callback indirection with plain method calls.
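The `String#replace` improvement above boils down to mutating the buffer in one call instead of two; both approaches yield the same bytes (the precise motivation inside httpx's buffer code isn't detailed here, so this is only a sketch of the pattern):

```ruby
buffer = +"stale contents"

# clean-then-append: two mutations, with an intermediate empty buffer
buffer.clear
buffer << "fresh contents"

# single-call equivalent now favoured in buffer operations
buffer.replace("fresh contents")

buffer # => "fresh contents"
```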
## Bugfixes
* https: prevent modification of the ssl context object when performing a
reconnection.
* compression: do not return early if the decompression buffer yields an
empty string (more frequent under jruby 10).
* response cache: take query params into account when caching or retrieving
cached responses.
* response cache: do not decompress cached responses on body consumption
(the response bodies are cached in plaintext).
* native resolver: pick next timeout associated with the hostname being
resolved (and not the hostnames in the queue).
* pool: assume that, even when signalled that a connection is available,
context may be switched to a session which also checks the same connection
out, before it's able to pick it up; in such a case, start from the
beginning, until the pool timeout expires.
* session: forego bookkeeping when a connection is coalesced (instead,
allow it to be dropped).
* digest_auth: make sure that an array is sent back if the probe response
fails.
* alt-svc: when alt-svc handshake happens with more in-flight requests,
defer termination to when these requests are made.
* http2: fix use of nonexistent variable `ex` when processing the connection
closed callback.
* connection: fix potential session dereferencing, which allowed
connections to be used across sessions, therefore bypassing needed
synchronization and leading to the `undefined method 'after' for
nil:NilClass` error.
* selector: close only selected connections (instead of all selectable
connections) when an error occurs during IO readiness wait calls.
* resolvers: correctly propagate abrupt termination errors to the
connection objects waiting for the answer.
* resolvers: when errors happen, force-close unresolved connections (and
ensure they're pinned to the corresponding session before the error
happens, and unpinned after the error is propagated).
* resolvers: ensure resolvers transition to "closed" state, on all cases,
when any error happens.
* resolvers: ensure that the next hostname is resolved when a timeout
happens on the current one.
* native resolver: fixed duplication of the hostname to resolve in the list
of candidates.
* https resolver: use a `system` resolver to resolve the DoH server
hostname (instead of rerouting it to itself).
* https resolver: skip loop error reporting when error happens outside of
it.
* https resolver: close connection on resolve errors, which prevents it
from lingering in the pool after termination; also deactivate it after
successful use.
* multi resolver: do not check resolvers back into the pool if it's a multi
resolver and the peer is still resolving (and do the check outside of the
critical area).
* sentry adapter: removed usage of deprecated method which has been removed
in sentry-ruby 6.0.0.
* selector: when coalescing connections, pin the current session before
merging connections, to prevent it from registering in a selector being
used in a different thread, and inadvertently allowing it to be used across
threads.
* session: always pin connection before early-or-lazy resolution
(fixes connection pool accounting under connection coalescing).
## Chores
* logging emits a timestamp as well (to monitor timeouts).
* `:stream_bidi` plugin: extends HTTP2 module by using plugin extensions.
* connection: remove session/selector references when closing a connection
(prevents leaking them beyond the usage scope).
# 1.6.2
## Bugfixes
* revert of behaviour introduced in 1.6.1 around handling of `:ip_families`
for name resolution.
* when no option is passed, do not assume no IPv6 connectivity if no
available non-local IP is found, as the local network may still be
reachable under `[::]`.
* bail out connection if the tcp connection was established but the ssl
session failed.
* even when alpn negotiation succeeded, the HTTP/2 connection could still be
initialized and bytes put in the buffer, which would cause a busy loop
trying to write to a non-open socket.
* datadog: fix initialization of spans in non-connection-related errors
  * past code was relying on the error being DNS-related, but other errors
could pop up; the fix was moving the init-time setup earlier to the session,
when a request is first passed to the associated connection.
  * it can also fail earlier, so provide a workaround for that as well.
# 1.6.1
## Improvements
* `:oauth` plugin: `.oauth_session` can be called with an `:audience`
parameter, which has the effect of adding it as an extra form body
parameter of the token request.
## Bugfixes
* options: when freezing the options, skip freezing `:debug`; it's usually
a file/IO/stream object (stdout, stderr...), and freezing it would cause
errors when log messages are written.
* tcp: fixed adding IPv6 addresses to a tcp object while an IPv4 connection
probe is ongoing, so that the next try uses the first IPv6 address.
* tcp: reorder addresses on reconnection, so ipv6 is tried first in case it
is still valid.
* tcp: make sure ip index is decremented on error, so the next tried IP may
be a valid one.
* tcp: do not reattempt connecting if there are no available addresses to
connect to. This may happen in a fiber-aware context, where fiber A waits on
the connection, and fiber B reconnects as a result of an error or GOAWAY
frame and waits on the resolver DNS answer; when context is passed back to
fiber A, it should invalidate the response and try again while
waiting on the resolver as well.
* ssl: on connection coalescing, do not merge the ssl sessions, as these
are frozen post-initialization.
* http2: all received GOAWAY frames emit a goaway error and tear down the
connection independent of the error code (it was only doing so for
`:noerror`, but others may appear).
* do not check at require time whether the network is multi-homed; instead,
defer it to first use and cache (this can break environments which block
access to certain syscalls during boot time).
* options: do not ignore when user sets `:ip_families` in name resolution.