Subscriptions
Hive Gateway fully supports federated subscriptions and behaves just like federated subscriptions in Apollo Router.
Subgraphs providing subscriptions can communicate with Hive Gateway through one of the following protocols:

- GraphQL over SSE (Server-Sent Events)
- GraphQL over WebSocket
- HTTP Callback

Clients connecting to Hive Gateway may use either:

- GraphQL over SSE
- GraphQL over WebSocket
When to use subscriptions
GraphQL subscriptions allow clients to stay updated in real time.
Most of the time, a PubSub system is used to propagate events in the backend system. A client can use subscriptions to receive those events, augmented with all the data it needs thanks to GraphQL's ability to resolve additional fields.
Subscriptions can be used for applications that rely on events or live data, such as chats, IoT sensors, alerting, stock prices, and so on.
Learn more about Subscriptions
Subscriptions in Gateways
In the context of a gateway, subscriptions are forwarded from the client to the subgraph implementing the subscribed field.
Each event received from the upstream subgraph is augmented with the requested data from other subgraphs, and then sent to the client.
Hive Gateway also abstracts away the underlying protocol used to transport the data: a client can use a different transport than the one used to connect to the upstream subgraph.
Configure subgraph transport
By default, Hive Gateway will always try to use the same transport for queries, mutations and subscriptions.
In the case of HTTP, the default protocol is GraphQL over SSE. We highly recommend it, since it's the most performant and idiomatic.
If your subgraph doesn’t implement subscriptions over SSE, you can configure Hive Gateway to use GraphQL over WebSocket or HTTP Callback.
Whichever protocol is used by Hive Gateway to subscribe to the upstream subgraphs, downstream clients can subscribe to the gateway using any supported protocol.
Subscriptions using WebSockets
If your subgraph uses WebSockets for subscription support (like with Apollo Server), Hive Gateway will need additional configuration pointing to the WebSocket server path on the subgraph.
Please note that WebSockets for communication between Hive Gateway and subgraphs are suboptimal compared to other possible transports. We recommend using either SSE or HTTP Callbacks instead.
import { defineConfig, type WSTransportOptions } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  supergraph: 'supergraph.graphql',
  transportEntries: {
    // use "*.http" to apply options to all subgraphs with HTTP
    '*.http': {
      options: {
        subscriptions: {
          kind: 'ws',
          // override the path if it is different than normal http
          location: '/subscriptions'
        }
      }
    }
  }
})
Subscriptions using HTTP Callback
If your subgraph uses HTTP Callback protocol for subscriptions, Hive Gateway will need additional configuration.
import { defineConfig, type HTTPCallbackTransportOptions } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  supergraph: 'supergraph.graphql',
  // Setup Hive Gateway to listen for webhook callbacks, and emit the payloads through the PubSub engine
  webhooks: true,
  transportEntries: {
    // use "*.http" to apply options to all subgraphs with HTTP
    '*.http': {
      options: {
        subscriptions: {
          kind: 'http-callback',
          options: {
            // The gateway's public URL, which your subgraphs access, must include the path configured on the gateway
            public_url: 'http://localhost:4000/callback',
            // The path of the gateway's callback endpoint
            path: '/callback',
            // Heartbeat interval to make sure the subgraph is still alive, and avoid hanging requests
            heartbeat_interval: 5000
          } satisfies HTTPCallbackTransportOptions
        }
      }
    }
  }
})
Subscriptions using mixed protocols
Hive Gateway supports using different transports for different subgraphs. By default, subscriptions use the same transport as queries and mutations. This can be changed using the transportEntries option.
The key of each entry determines which subgraphs are impacted:

- *: all subgraphs
- *.{transportKind}: all subgraphs using transportKind. For example, *.http will impact all subgraphs using the http transport.
- {subgraphName}: a specific subgraph.
Configurations are inherited and merged from the least specific to the most specific matcher. The only exception is headers, which is not inherited for the ws transport.
For example, consider 4 subgraphs:

- products: using the http transport for queries, and HTTP Callbacks for subscriptions
- views: using the http transport for queries, and WebSockets for subscriptions
- stocks: using the http transport for queries, and WebSockets for subscriptions
- stores: using the mysql transport
import { defineConfig, type WSTransportOptions } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  transportEntries: {
    '*.http': {
      // Will be applied to products, views and stocks subgraphs, but not stores.
      options: {
        subscriptions: {
          kind: 'ws',
          options: {
            connectionParams: {
              token: '{context.headers.authorization}'
            }
          } satisfies WSTransportOptions
        }
      }
    },
    products: {
      // Will override the subscriptions configuration for the products subgraph only
      options: {
        subscriptions: {
          kind: 'http-callback',
          location: '/subscriptions',
          headers: [['authorization', 'context.headers.authorization']]
        }
      }
    }
  }
})
Propagation of authentication and headers
Hive Gateway can propagate the downstream client's Authorization header (or any other header) to the upstream subgraph.
The propagation of headers differs depending on whether you use pure HTTP transports (SSE or HTTP Callbacks) or WebSockets.
For pure HTTP subscription transports, header propagation follows the same configuration as normal upstream requests.
import { defineConfig } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  propagateHeaders: {
    fromClientToSubgraphs({ request }) {
      return {
        authorization: request.headers.get('authorization')
      }
    }
  }
})
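For WebSockets, headers available at connection time are instead typically forwarded through connectionParams, as shown in the mixed-protocols example above. A minimal sketch, reusing the {context.headers.authorization} placeholder from that example:

import { defineConfig } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  transportEntries: {
    '*.http': {
      options: {
        subscriptions: {
          kind: 'ws',
          options: {
            // sent as the ConnectionInit payload of the graphql-ws handshake
            connectionParams: {
              token: '{context.headers.authorization}'
            }
          }
        }
      }
    }
  }
})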
Configure Client subscriptions
Client subscriptions are enabled by default and compatible with both GraphQL over SSE and GraphQL over WebSocket with graphql-ws.
The default endpoint for subscriptions is /graphql and follows the graphqlEndpoint option, as for queries and mutations.
You can disable the WebSocket server by using the disableWebsockets option in the config file or by providing the --disable-websockets option to the hive-gateway CLI.
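For example, a minimal sketch combining these options to serve SSE-only client subscriptions on a custom endpoint (both option names come from the paragraph above):

import { defineConfig } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  supergraph: 'supergraph.graphql',
  // serve queries, mutations and subscriptions on a custom endpoint
  graphqlEndpoint: '/api/graphql',
  // turn off the WebSocket server; clients can still subscribe over SSE
  disableWebsockets: true
})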
Closing active subscriptions on schema change
When the schema changes in Hive Gateway, all active subscriptions will be completed after emitting the following execution error:
{
  "errors": [
    {
      "message": "subscription has been closed due to a schema reload",
      "extensions": {
        "code": "SUBSCRIPTION_SCHEMA_RELOAD"
      }
    }
  ]
}
This is also what Apollo Router does when terminating subscriptions on a schema update.
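Clients can watch for this error code and resubscribe. A hypothetical sketch using the graphql-sse client (the retry strategy is an assumption, not built-in behavior):

import { createClient } from 'graphql-sse'

const client = createClient({ url: 'http://localhost:4000/graphql' })

function subscribeWithRetry(query: string) {
  client.subscribe(
    { query },
    {
      next: result => {
        const schemaReloaded = result.errors?.some(
          err => err.extensions?.code === 'SUBSCRIPTION_SCHEMA_RELOAD'
        )
        if (schemaReloaded) {
          // the gateway completed the old subscription; start a new one
          subscribeWithRetry(query)
          return
        }
        console.log(result.data)
      },
      error: err => console.error(err),
      complete: () => {
        // reached after the gateway closes the subscription
      }
    }
  )
}

subscribeWithRetry(/* GraphQL */ `subscription { productPriceChanged { name price } }`)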
Example
We’ll implement two GraphQL Yoga federation services behaving as subgraphs. The “products” service exposes a subscription operation type for subscribing to product changes, while the “reviews” service simply exposes review stats about products.
The example is somewhat similar to Apollo's documentation, except that we use GraphQL Yoga here and significantly reduce the setup requirements.
You can find this example source on GitHub.
Install dependencies
npm i graphql-yoga @apollo/subgraph graphql
Services
In this example, we will compose 2 services:

- Products, which contains the products data
- Reviews, which contains reviews of products

These 2 services need to run in parallel with the gateway.
import { createServer } from 'http'
import { parse } from 'graphql'
import { createYoga } from 'graphql-yoga'
import { buildSubgraphSchema } from '@apollo/subgraph'
import { resolvers } from './my-resolvers'

const typeDefs = parse(/* GraphQL */ `
  type Product @key(fields: "id") {
    id: ID!
    name: String!
    price: Int!
  }

  type Subscription {
    productPriceChanged: Product!
  }
`)

const yoga = createYoga({ schema: buildSubgraphSchema([{ typeDefs, resolvers }]) })

const server = createServer(yoga)

server.listen(40001, () => {
  console.log('Products subgraph ready at http://localhost:40001')
})
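The "reviews" service looks similar. Here is a minimal sketch; the Review type, its score field and the static resolver are assumptions derived from the subscription example below:

import { createServer } from 'http'
import { parse } from 'graphql'
import { createYoga } from 'graphql-yoga'
import { buildSubgraphSchema } from '@apollo/subgraph'

const typeDefs = parse(/* GraphQL */ `
  type Review {
    score: Int!
  }

  type Product @key(fields: "id") {
    id: ID!
    reviews: [Review!]!
  }
`)

const resolvers = {
  Product: {
    // hypothetical data source: return static review stats for any product
    reviews: () => [{ score: 5 }]
  }
}

const yoga = createYoga({ schema: buildSubgraphSchema([{ typeDefs, resolvers }]) })

const server = createServer(yoga)

server.listen(40002, () => {
  console.log('Reviews subgraph ready at http://localhost:40002')
})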
Supergraph
Once all services have been started, we can generate a supergraph schema. It will then be served by the Hive Gateway.
You can generate this schema with either GraphQL Mesh or Apollo Rover.
To generate a supergraph with Apollo Rover, you first need to create a configuration file describing the list of subgraphs:
federation_version: =2.3.2
subgraphs:
  products:
    routing_url: http://localhost:40001
    schema:
      subgraph_url: http://localhost:40001
  reviews:
    routing_url: http://localhost:40002
    schema:
      subgraph_url: http://localhost:40002
You can then run the Rover command to generate the supergraph schema SDL:
rover supergraph compose --config ./supergraph.yaml > supergraph.graphql
For more details about how to use Apollo Rover, please refer to the official documentation.
Start Gateway
You can now start Hive Gateway. Without any configuration provided, the Gateway will load the supergraph file supergraph.graphql from the current directory, and serve it with a set of sensible default features enabled.
hive-gateway supergraph
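If your supergraph file lives elsewhere, you can also pass its location to the CLI explicitly (a sketch; the positional path argument of the supergraph command is assumed here):

hive-gateway supergraph ./supergraph.graphql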
Subscribe
By default, subscriptions are enabled and handle both the WebSocket and SSE transports.
Let’s now subscribe to the product price changes by executing the following query:
subscription {
  productPriceChanged {
    # Defined in Products subgraph
    name
    price
    reviews {
      # Defined in Reviews subgraph
      score
    }
  }
}
Hive Gateway will intelligently resolve all fields on subscription events and deliver the complete result.
You can subscribe to the gateway through Server-Sent Events (SSE) (in JavaScript, using EventSource or graphql-sse).
Most clients offer a way to use subscriptions over SSE. You can find examples for Apollo Client and Relay, as well as setups for other clients, in the Recipes for Clients Usage section of the graphql-sse documentation.
To quickly test subscriptions, you can use curl in your terminal to subscribe to the gateway, as curl has native support for SSE.
curl 'http://localhost:4000/graphql' \
  -H 'accept: text/event-stream' \
  -H 'content-type: application/json' \
  --data-raw '{"query":"subscription OnProductPriceChanged { productPriceChanged { name price reviews { score } } }","operationName":"OnProductPriceChanged"}'
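Since the gateway also accepts GraphQL over WebSocket, here is a minimal sketch subscribing with the graphql-ws client (the default /graphql endpoint is assumed):

import { createClient } from 'graphql-ws'

const client = createClient({ url: 'ws://localhost:4000/graphql' })

const unsubscribe = client.subscribe(
  {
    query: /* GraphQL */ `
      subscription OnProductPriceChanged {
        productPriceChanged {
          name
          price
          reviews {
            score
          }
        }
      }
    `
  },
  {
    next: result => console.log(result.data),
    error: err => console.error(err),
    complete: () => console.log('subscription completed')
  }
)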
Event-Driven Federated Subscriptions (EDFS)
Hive Gateway supports event-driven federated subscriptions, allowing you to publish events to a message broker (NATS, Kafka, Redis, etc.) and have those events automatically routed to the appropriate Hive Gateway subscribers.
If you do not know what Event-Driven Federated Subscriptions (EDFS) are, please refer to this great article by Wundergraph.
Let's go over how you would set up EDFS with Mesh Compose and Hive Gateway, using Redis as the message broker.
Composing the Supergraph With Mesh Compose
Let's compose our supergraph with Mesh Compose and add the subscription fields to the schema.
First, we need to make sure we have a "products" subgraph ready and running on http://localhost:3000/graphql:
type Query {
  hello: String!
}

type Product @key(fields: "id") {
  id: ID!
  name: String!
  price: Float!
}
And then we need to add the subscription fields like this:
import { defineConfig, loadGraphQLHTTPSubgraph } from '@graphql-mesh/compose-cli'

export const composeConfig = defineConfig({
  subgraphs: [
    {
      sourceHandler: loadGraphQLHTTPSubgraph('products', {
        endpoint: `http://localhost:3000/graphql`
      })
    }
  ],
  additionalTypeDefs: /* GraphQL */ `
    extend schema {
      subscription: Subscription
    }

    type Subscription {
      newProduct: Product! @resolveTo(pubsubTopic: "new_product")
    }
  `
})
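You can then generate the supergraph file with the Mesh Compose CLI (assuming the mesh-compose binary shipped with @graphql-mesh/compose-cli and its -o output flag):

npx mesh-compose -o supergraph.graphql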
The composed supergraph schema will now contain a newProduct subscription field that will have the gateway subscribe to the new_product topic. This is done by the @resolveTo directive, where the pubsubTopic argument specifies the topic to subscribe to.
Hive Gateway will intelligently detect the best subgraph to resolve the Product from by looking at the subscription event data.
Configuring Hive Gateway With Redis PubSub
The next step is to configure Hive Gateway to use Redis PubSub as the message broker and consume the supergraph generated by Mesh Compose. This is how the configuration would look:
Redis PubSub does not come with Hive Gateway; you first have to install the @graphql-hive/pubsub package and its ioredis peer dependency:
npm i @graphql-hive/pubsub ioredis
import Redis from 'ioredis'
import { defineConfig } from '@graphql-hive/gateway'
import { RedisPubSub } from '@graphql-hive/pubsub/redis'

/**
 * When a Redis connection enters "subscriber mode" (after calling SUBSCRIBE), it can only execute
 * subscriber commands (SUBSCRIBE, UNSUBSCRIBE, etc.). Meaning, it cannot execute other commands like PUBLISH.
 * To avoid this, we use two separate Redis clients: one for publishing and one for subscribing.
 */
const pub = new Redis()
const sub = new Redis()

export const gatewayConfig = defineConfig({
  supergraph: 'supergraph.graphql', // the supergraph generated by Mesh Compose
  pubsub: new RedisPubSub(
    { pub, sub },
    {
      // we make sure to use the same prefix for all gateways to share the same channels and pubsub
      // meaning, all gateways using this channel prefix will receive and publish to the same topics
      channelPrefix: 'edfs'
    }
  )
})
Subscribing and Publishing Events
We're now ready to subscribe to the newProduct subscription field and publish events to the new_product topic. The publishing of events can happen from anywhere; it doesn't have to be from within Hive Gateway or any particular subgraph. You can, for example, implement a separate service that is only responsible for emitting subscription events.
You can subscribe to the newProduct subscription from a client using any of the transports supported by Hive Gateway. Let's subscribe with this query:
subscription {
  newProduct {
    name
    price
  }
}
and then emit an event to the Redis instance on the new_product topic with the edfs prefix like this:
PUBLISH edfs:new_product '{"id":"roomba70x"}'
The subscriber will then receive the following event:
{
  "data": {
    "newProduct": {
      "name": "Roomba 70x",
      "price": 279.99
    }
  }
}
Note that the event payload only contains the id field, which is the only field required to resolve the Product type. Hive Gateway will then fetch the missing fields from the "products" subgraph.
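The same event can also be emitted programmatically, for example from a dedicated publisher service. A sketch using the same RedisPubSub engine (assuming it serializes payloads as JSON, matching the PUBLISH command above, and runs with the same channelPrefix and Redis instance as the gateways):

import Redis from 'ioredis'
import { RedisPubSub } from '@graphql-hive/pubsub/redis'

const pub = new Redis()
const sub = new Redis()

// must use the same channelPrefix as the gateways to share topics
const pubsub = new RedisPubSub({ pub, sub }, { channelPrefix: 'edfs' })

// equivalent to: PUBLISH edfs:new_product '{"id":"roomba70x"}'
await pubsub.publish('new_product', { id: 'roomba70x' })

await pubsub.dispose()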
PubSub
Hive Gateway internally uses a PubSub system to handle subscriptions. By default, an in-memory PubSub engine is used when subscriptions are detected.
You can implement your own PubSub engine by implementing the PubSub interface from the @graphql-hive/pubsub package, which looks like this:
export type TopicDataMap = { [topic: string]: any /* data */ }

export type PubSubListener<Data extends TopicDataMap, Topic extends keyof Data> = (
  data: Data[Topic]
) => void

type MaybePromise<T> = T | Promise<T>

export interface PubSub<M extends TopicDataMap = TopicDataMap> {
  /**
   * Publish {@link data} for a {@link topic}.
   * @returns `void` or a `Promise` that resolves when the data has been successfully published
   */
  publish<Topic extends keyof M>(topic: Topic, data: M[Topic]): MaybePromise<void>
  /**
   * A distinct list of all topics that are currently subscribed to.
   * Can be a promise to accommodate distributed systems where subscribers exist in other
   * locations and we need to know about all of them.
   */
  subscribedTopics(): MaybePromise<Iterable<keyof M>>
  /**
   * Subscribe and listen to a {@link topic} receiving its data.
   *
   * If the {@link listener} is provided, it will be called whenever data is emitted for the {@link topic}.
   *
   * @returns an unsubscribe function or a `Promise<unsubscribe function>` that resolves when the subscription is successfully established. The unsubscribe function returns `void` or a `Promise` that resolves on successful unsubscribe and subscription cleanup.
   *
   * If the {@link listener} is not provided,
   *
   * @returns an `AsyncIterable` that yields data for the given {@link topic}
   */
  subscribe<Topic extends keyof M>(topic: Topic): AsyncIterable<M[Topic]>
  subscribe<Topic extends keyof M>(
    topic: Topic,
    listener: PubSubListener<M, Topic>
  ): MaybePromise<() => MaybePromise<void>>
  /**
   * Closes active subscriptions and disposes of all resources. Publishing and subscribing after disposal
   * is not possible and will throw an error if attempted.
   */
  dispose(): MaybePromise<void>
  /** @see {@link dispose} */
  [Symbol.asyncDispose](): Promise<void>
}
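For illustration, here is how consuming code might use this interface with the built-in in-memory engine (a sketch; the TopicDataMap generic on MemPubSub is an assumption based on the interface above):

import { MemPubSub } from '@graphql-hive/pubsub'

// topic-to-data map enabling type-safe publishes and subscribes
type MyTopics = {
  new_product: { id: string }
}

const pubsub = new MemPubSub<MyTopics>()

// listener overload: resolves to an unsubscribe function
const unsubscribe = await pubsub.subscribe('new_product', data => {
  console.log('received product', data.id)
})

await pubsub.publish('new_product', { id: 'roomba70x' })

await unsubscribe()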
The @graphql-hive/pubsub package also provides a few built-in PubSub engines, at the moment an in-memory engine and a Redis-based engine.
In-Memory PubSub
The in-memory PubSub engine is the default engine used when subscriptions are detected. It can also be used explicitly by setting it in the configuration.
import { defineConfig } from '@graphql-hive/gateway'
import { MemPubSub } from '@graphql-hive/pubsub'
// or import it from the Hive Gateway package:
// import { MemPubSub } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  supergraph: 'supergraph.graphql',
  pubsub: new MemPubSub()
})
Redis PubSub
For more advanced use-cases, such as running multiple instances of Hive Gateway, you can use the Redis-based PubSub engine we offer out of the box.
In case you have distributed instances of Hive Gateway, using a distributed PubSub engine is required to make sure all instances are aware of all active subscriptions and can publish events to the correct subscribers.
For example, when using the webhooks transport for subscriptions, the subgraph will send events to only one instance of Hive Gateway. If that instance doesn’t have any active subscription for the topic, the event will be lost. Using a distributed PubSub engine solves this problem.
Redis PubSub does not come with Hive Gateway; you first have to install the @graphql-hive/pubsub package and its ioredis peer dependency:
npm i @graphql-hive/pubsub ioredis
import Redis from 'ioredis'
import { defineConfig } from '@graphql-hive/gateway'
import { RedisPubSub } from '@graphql-hive/pubsub/redis'

/**
 * When a Redis connection enters "subscriber mode" (after calling SUBSCRIBE), it can only execute
 * subscriber commands (SUBSCRIBE, UNSUBSCRIBE, etc.). Meaning, it cannot execute other commands like PUBLISH.
 * To avoid this, we use two separate Redis clients: one for publishing and one for subscribing.
 */
const pub = new Redis()
const sub = new Redis()

export const gatewayConfig = defineConfig({
  webhooks: true,
  pubsub: new RedisPubSub(
    { pub, sub },
    {
      // we make sure to use the same prefix for all gateways to share the same channels and pubsub
      // meaning, all gateways using this channel prefix will receive and publish to the same topics
      channelPrefix: 'my-shared-gateways'
    }
  )
})
Now, with this setup, any instance of Hive Gateway using the same channelPrefix will be able to share the same subscriptions.
Note that this works only if your subgraphs publish subscription events to Hive Gateway's webhook on the same PubSub topic (see the documentation on using webhooks to handle subscriptions).
You should also take a look at the E2E test serving as an example of how distributed subscriptions would work with multiple Hive Gateway instances.