Isomorphic TypeScript APIs: End-to-end type-safety between client & server
Full-Stack development just reached a whole new level of productivity. Isomorphic TypeScript APIs, as I call them, blur the lines between client and server. Without a separate code generation step, the developer receives immediate feedback when they make a change to the API. You can easily jump between the client and server code because, after all, it's the same code.
Here's a short video to illustrate the experience:
The video shows the API definition of a mutation on the left tab and the client implementation that calls the mutation on the right tab. When we change an input field's name or the type of the mutation, the client code immediately reflects the changes and shows us the errors in the IDE. This immediate feedback loop is a game changer for full-stack development.
Let's talk about the history of Isomorphic TypeScript APIs, how they work, the benefits and the drawbacks, and how you can get started with them.
Isomorphic TypeScript APIs: What exactly does the term mean and where does it come from?
There are a few frameworks that allow you to define your API in TypeScript and share the code between the client and the server, the most popular being tRPC.
There are two different approaches to achieving this kind of feedback loop: code generation and type inference. We'll talk about the differences and the pros and cons of each approach. You'll see that type inference is much better during development but has drawbacks when it comes to sharing types across repositories.
The term "Isomorphic TypeScript APIs" is inspired by the JavaScript and React.js community. In the React community, the term "isomorphic" or "universal" is used to describe a React application that uses the same code to render on the server and the client, which I think is quite similar to what we're doing here with APIs.
Isomorphic TypeScript APIs allow you to define the API contract on the server and infer the client code from it through type inference. Consequently, we don't have to go through a code generation step to get a type-safe client; we use the TypeScript compiler to infer the client code immediately during development instead.
The magic behind Isomorphic TypeScript APIs
Let's take a look at how Isomorphic TypeScript APIs work. How is it possible to share the same code between the client and the server? Wouldn't that mean that the client would have to import the server code?
The magic behind Isomorphic TypeScript APIs is TypeScript's import type statement. TypeScript 3.8 introduced Type-Only Imports and Exports:
Type-only imports and exports are a new form of import and export. They can be used to import or export types from a module without importing or exporting any values.
This means we can import types from a module, and therefore share types between the client and the server, without importing the server code itself. That's the crucial part of Isomorphic TypeScript APIs.
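To make the idea concrete, here's a minimal, self-contained sketch; the file name and the User shape are made up for illustration:

```typescript
// What the "server" module (say, server.ts) would export:
export type User = { id: string; name: string };
export const fetchUserFromDb = async (id: string): Promise<User> => ({
  id,
  name: 'Jens',
});

// What the "client" would write instead of a value import:
//   import type { User } from './server';
// That line is erased entirely at compile time, so fetchUserFromDb and the
// rest of the server module never end up in the client bundle.
const render = (user: User): string => `${user.name} (#${user.id})`;

console.log(render({ id: '1', name: 'Jens' }));
```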
Now, let's take a deep dive into how we can put this knowledge into practice.
1. Define the API contract on the server
// .wundergraph/operations/users/get.ts
import { createOperation, z } from '../../generated/wundergraph.factory';

export default createOperation.query({
  input: z.object({
    id: z.string(),
  }),
  handler: async ({ input }) => {
    return {
      id: input.id,
      name: 'Jens',
      bio: 'Founder of WunderGraph',
    };
  },
});
WunderGraph uses file-based routing similar to Next.js. By creating the file get.ts in the .wundergraph/operations/users folder, we're registering this operation on the /users/get route.
We could now call this operation using curl:
curl http://localhost:9991/operations/users/get?id=123
That's great if we're not using TypeScript, but the whole point of this post is to use TypeScript. So let's take a look at how createOperation.query is defined.
2. Exposing types from the API definition
const createQuery = <IC extends InternalClient, UserRole extends string>() => <I extends z.AnyZodObject, R>(
  {
    input,
    handler,
    live,
    requireAuthentication = false,
    internal = false,
    rbac,
  }: {
    input?: I;
    handler: (ctx: HandlerContext<I, IC, UserRole>) => Promise<R>;
    live?: LiveQueryConfig;
  } & BaseOperationConfiguration<UserRole>): NodeJSOperation<z.infer<I>, R, 'query', IC, UserRole> => {
  return {
    type: 'query',
    inputSchema: input,
    queryHandler: handler,
    internal: internal || false,
    requireAuthentication: requireAuthentication,
    rbac: {
      denyMatchAll: rbac?.denyMatchAll || [],
      denyMatchAny: rbac?.denyMatchAny || [],
      requireMatchAll: rbac?.requireMatchAll || [],
      requireMatchAny: rbac?.requireMatchAny || [],
    },
    liveQuery: {
      enable: live?.enable ?? true,
      pollingIntervalSeconds: live?.pollingIntervalSeconds ?? 5,
    },
  };
};
export type HandlerContext<I, IC extends InternalClient, Role extends string> = I extends z.AnyZodObject
  ? _HandlerContext<z.infer<I>, IC, Role>
  : Omit<_HandlerContext<never, IC, Role>, 'input'>;

export type NodeJSOperation<Input, Response, OperationType extends OperationTypes, IC extends InternalClient, UserRole extends string> = {
  type: OperationType;
  inputSchema?: z.ZodObject<any>;
  queryHandler?: (ctx: HandlerContext<Input, IC, UserRole>) => Promise<Response>;
  mutationHandler?: (ctx: HandlerContext<Input, IC, UserRole>) => Promise<Response>;
  subscriptionHandler?: SubscriptionHandler<Input, Response, IC, UserRole>;
  requireAuthentication?: boolean;
  internal: boolean;
  liveQuery: {
    enable: boolean;
    pollingIntervalSeconds: number;
  };
  rbac: {
    requireMatchAll: string[];
    requireMatchAny: string[];
    denyMatchAll: string[];
    denyMatchAny: string[];
  };
};
It's a lot of code to unpack, so let's go through it step by step.
The createQuery function is a factory that returns the createOperation.query function. By wrapping the actual function in a factory, we're able to pass generic types like InternalClient (IC) and UserRole to the function. This allows us to inject generated types without complicating the API for the user.
What's important to note are the two generic arguments of the createQuery function: I extends z.AnyZodObject, R. I is the input type, and R is the response type.
The user can pass an input definition to the createOperation.query function, as seen in step 1. Once this value is passed to the createQuery function, the I generic type is inferred from the input definition. This enables the following:
- We can use z.infer<I> to infer the input type from the input definition
- This inferred type is used to make the handler function type-safe
- Additionally, we set the inferred type as the Input generic type of the NodeJSOperation type
We can later use import type to import the Input type from the NodeJSOperation.
What's missing is the Response type R, which is less complicated than the Input type. The second generic argument of the createQuery function is R (the response type). If you look closely at the handler argument definition, you'll see that it's a function that returns a Promise<R>. So, whatever we're returning from the handler function is the Response type. We simply pass R as the second generic argument to the NodeJSOperation type, and we're done.
Now we've got a NodeJSOperation type with two generic arguments, Input and Response. The rest of the code ensures that the internal client and user object are type-safe but ergonomic; for example, omitting the input property if the user didn't pass an input definition.
3. Exposing the API contract on the client
Finally, we need a way to import the API contract on the client. We're using a bit of code generation when creating the models for the client to make this a pleasant developer experience.
Keep in mind that the NodeJSOperation type is generic, with the Input and Response types as generic arguments, so we need a way to extract them to make our client models type-safe. Here are two helper types that achieve this using the infer keyword:
export type ExtractInput<B> = B extends NodeJSOperation<infer T, any, any, any, any> ? T : never;
export type ExtractResponse<B> = B extends NodeJSOperation<any, infer T, any, any, any> ? T : never;
The infer keyword allows us to extract a generic argument from a generic type at a specific position. In this case, we're extracting the Input and Response types from the NodeJSOperation type.
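As a runnable illustration of the same pattern, here's a simplified stand-in for NodeJSOperation (the names are made up) with the two extraction helpers applied to it:

```typescript
// Simplified stand-in for NodeJSOperation:
type Operation<Input, Response> = {
  handler: (input: Input) => Promise<Response>;
};

// `infer` pattern-matches the generic argument at a given position:
type ExtractInput<B> = B extends Operation<infer T, any> ? T : never;
type ExtractResponse<B> = B extends Operation<any, infer T> ? T : never;

const getUserOp: Operation<{ id: string }, { id: string; name: string }> = {
  handler: async (input) => ({ id: input.id, name: 'Jens' }),
};

// Both aliases are resolved entirely at compile time:
type GetUserInput = ExtractInput<typeof getUserOp>;       // { id: string }
type GetUserResponse = ExtractResponse<typeof getUserOp>; // { id: string; name: string }

// A value typed with the extracted type checks as expected:
const exampleInput: GetUserInput = { id: '123' };
console.log(exampleInput.id);
```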
Here's an excerpt from the client models file that uses this helper function:
import type function_UsersGet from "../operations/users/get";
import type { ExtractInput, ExtractResponse } from "@wundergraph/sdk/operations";
export type UsersGetInput = ExtractInput<typeof function_UsersGet>;
export type UsersGetResponseData = ExtractResponse<typeof function_UsersGet>;
export interface UsersGetResponse {
  data?: UsersGetResponseData;
  errors?: ReadonlyArray<GraphQLError>;
}
Notice how we're only importing the function_UsersGet type from the operations file, not the actual implementation. At compile time, all the type imports are removed.
There's one more nugget here that you might easily miss: the generated client models export the UsersGetInput type, which is inferred from the function_UsersGet type, which is the type export of the NodeJSOperation type, which infers its Input type from the createOperation.query function.
This means that there's a chain of type inference happening here. This doesn't just make the client models type-safe but also enables another very powerful feature that I think is very important to highlight.
Inferring clients from the server API contract definition enables refactoring of the server API contract without breaking the client.
Let's add some client code to illustrate this:
import { useQuery, withWunderGraph } from '../../components/generated/nextjs';

const Users = () => {
  const { data } = useQuery({
    operationName: 'users/get',
    input: {
      id: '1',
    },
  });
  return (
    <div style={{ color: 'white' }}>
      <div>{data?.id}</div>
      <div>{data?.name}</div>
      <div>{data?.bio}</div>
    </div>
  );
};

export default withWunderGraph(Users);
This is the generated client code for the users/get query. If we set the operationName to users/get (which, by the way, is a type-safe string), we're forced to pass an input object that matches the UsersGetInput type.
If we now refactor the id property to userId in the server API contract, the client code will be refactored to userId as well because the UsersGetInput type is inferred from the server API contract. If, instead, we change the type of the id property from string to number, the IDE will immediately show an error because the inferred type of the id field (number) no longer matches the string we're passing.
This kind of immediate feedback loop is what makes this approach so powerful. If you've previously worked with REST or GraphQL APIs, you'll know that refactoring the API contract would involve many more steps.
The different types of Operations available in WunderGraph
WunderGraph supports three different types of TypeScript operations: queries, mutations, and subscriptions. Let's have a look at how you can define them.
Isomorphic TypeScript APIs: Queries
We've seen a Query Operation above, but I still want to list all three types of operations here for completeness.
export default createOperation.query({
  input: z.object({
    id: z.string(),
  }),
  handler: async ({ input }) => {
    return {
      id: input.id,
      name: 'Jens',
      bio: 'Founder of WunderGraph',
    };
  },
});
A query operation will be registered as a GET request handler on the server. By defining an input definition, the input argument of the handler function will be type-safe. Furthermore, we're also creating a JSON-Schema validation middleware for the endpoint.
Other options we'd be able to configure are rbac, for role-based access control; requireAuthentication, to require authentication for the endpoint; live, to configure live queries (enabled by default); and internal, to make this endpoint only available to other operations, not the client.
Once you enable authentication, you'll also be able to use the user property of the handler function argument:
export default createOperation.query({
  requireAuthentication: true,
  handler: async ({ user }) => {
    return db.findUser(user.email);
  },
});
This operation will return the user object from the database, using the email claim from the JWT token / cookie auth header as the identifier.
Isomorphic TypeScript APIs: Mutations
Next, let's take a look at a mutation operation:
export default createOperation.mutation({
  input: z.object({
    id: z.number(),
    name: z.string(),
    bio: z.string(),
  }),
  handler: async ({ input }) => {
    return {
      ...input,
    };
  },
});
A mutation operation will be registered as a POST request handler on the server. We're accepting three properties and returning them as-is; in a real handler, we would usually do some database operations here.
Isomorphic TypeScript APIs: Subscriptions
Finally, let's define a subscription operation:
export default createOperation.subscription({
  input: z.object({
    id: z.string(),
  }),
  handler: async function* ({ input }) {
    try {
      // set up your subscription here, e.g. connect to a queue / stream
      for (let i = 0; i < 10; i++) {
        yield {
          id: input.id,
          name: 'Jens',
          bio: 'Founder of WunderGraph',
          time: new Date().toISOString(),
        };
        // let's fake some delay
        await new Promise((resolve) => setTimeout(resolve, 1000));
      }
    } finally {
      // finally gets called when the client disconnects
      // you can use it to clean up the queue / stream connection
      console.log('client disconnected');
    }
  },
});
A subscription operation will be registered as a GET request handler on the server, which you can curl from the command line, or consume via SSE (Server-Sent Events) from the client by appending the query parameter ?wg_sse.
The handler function looks a bit different from the other two operations because it's an async generator function. Instead of returning a single value, we use the yield keyword to return a stream of values. Async generators allow us to create streams without having to deal with callbacks or promises.
One thing you might have wondered about is how to handle the client disconnecting. Async generators allow you to create a try / finally block.

Once the client disconnects from the subscription, we internally call the return function of the generator, which runs the finally block. Consequently, you can start your subscription and clean it up in the same function without using callbacks or promises. I think the async generator syntax is an incredibly ergonomic way to create asynchronous streams of data.
Bridging the gap between GraphQL, REST and TypeScript Operations
If you're familiar with GraphQL, you might have noticed that there's some overlap in terminology between GraphQL and Isomorphic TypeScript APIs. This is no coincidence.
First of all, we're calling everything an Operation, which is a common term in GraphQL. Secondly, we're calling read operations Queries, write operations Mutations, and streaming operations Subscriptions.
All of this is intentional because WunderGraph offers interoperability between GraphQL, REST, and Isomorphic TypeScript APIs. Instead of creating a .wundergraph/operations/users/get.ts file, we could have also created a .wundergraph/operations/users/get.graphql file:
query UsersGet($id: String!) {
  users_user(id: $id) {
    id
    name
    bio
  }
}
Given that we've added a users GraphQL API to our Virtual Graph, this GraphQL query would be callable from the client as if it were a TypeScript Operation. Both GraphQL and TypeScript Operations are exposed in the exact same way to the client. For the client, it makes no difference whether the implementation of an operation is written in TypeScript or GraphQL.
You can mix and match GraphQL and TypeScript Operations as you see fit. If a simple GraphQL Query is enough for your use case, you can use that. If you need more complex logic, like mapping a response, or calling multiple APIs, you can use a TypeScript Operation.
Additionally, we're not just registering GraphQL and TypeScript Operations as RPC endpoints; we're also allowing you to use the file system to give your operations structure. As we're also generating a Postman Collection for your API, you can easily share this API with your team or another company.
Calling other Operations from an Operation
It's important to note that you get type-safe access to other operations from within your TypeScript Operations handlers through the context object:
export default createOperation.query({
  input: z.object({
    code: z.string(),
  }),
  handler: async (ctx) => {
    const country = await ctx.internalClient.queries.Country({
      input: {
        code: ctx.input.code,
      },
    });
    const weather = await ctx.internalClient.queries.Weather({
      input: {
        city: country.data?.countries_country?.capital || '',
      },
    });
    return {
      country: country.data?.countries_country,
      weather: weather.data?.weather_getCityByName?.weather,
    };
  },
});
In this example, we're using the internalClient to call the Country and Weather operations and combine the results. You might remember how we passed IC extends InternalClient to the createOperation factory at the beginning of this article. That's how we make the internalClient type-safe.
Learning from the Past: A Summary of Preceding Work in the Field
We're not the first ones to use these techniques, so I think it's important to give credit where credit is due and explain where and why we're taking a different approach.
tRPC: The framework that started a new wave of TypeScript APIs
tRPC is probably the most-hyped framework in the TypeScript API space right now, as it made the import type approach to type-safe APIs popular.
I was chatting with Alex/KATT, the creator of tRPC, the other day, and he asked me why we're not directly using tRPC in WunderGraph as we could leverage the whole ecosystem of the framework. It's a great question that I'd like to answer here.
First of all, I think tRPC is a great framework, and I'm impressed by the work Alex and the community have done. That being said, there were a few things that didn't quite fit our use case.
One core feature of WunderGraph was and is to compose and integrate APIs through a virtual GraphQL layer. I discussed this earlier, but it's essential for us to allow users to define Operations in the .wundergraph/operations folder by creating .graphql files. That's how WunderGraph works, and it's a great way to connect different APIs together.
We've introduced the ability to create TypeScript Operations to give our users more flexibility. Pure TypeScript Operations allow you to directly talk to a database, or to compose multiple other APIs together in ways that are not possible with GraphQL. For example, the data manipulation and transformation capabilities of TypeScript are much more powerful than what you can do with GraphQL—even if you're introducing custom directives.
For us, TypeScript Operations are an extension of the existing functionality of WunderGraph. What was important to us was to make sure that we don't have to deal with two different ways of consuming APIs. So, by inheriting the structure, shape, and configuration options of the GraphQL layer, we're able to consume TypeScript Operations in the exact same way as GraphQL Operations. The only difference is that instead of calling one or more GraphQL APIs, we're calling a TypeScript Operation.
Furthermore, WunderGraph already has a plethora of existing features and middlewares like JSON-Schema validation, authentication, authorization, etc., which we're able to re-use for TypeScript Operations. All of these are already implemented in Golang, our language of choice for building the API Gateway of WunderGraph. As you might know, WunderGraph is divided into two parts: the API Gateway written in Golang; and the WunderGraph Server written in TypeScript, which builds upon fastify. As such, it was a clear choice for us to leverage our existing API Gateway and implement a lightweight TypeScript API server on top of it.
With that being said, I'd like to highlight a few things where we're taking a different approach to tRPC.
tRPC is framework-agnostic, WunderGraph is opinionated
One of the great things about tRPC is that it's both framework and transport layer-agnostic. This can be a double-edged sword, however: while it's great that you can use tRPC with any framework you want, there's the drawback that the user is forced to make a lot of decisions.
For example, the guide to using tRPC with Subscriptions explains how to use tRPC with WebSocket Subscriptions:
import { applyWSSHandler } from '@trpc/server/adapters/ws';
import ws from 'ws';
import { appRouter } from './routers/app';
import { createContext } from './trpc';

const wss = new ws.Server({
  port: 3001,
});
const handler = applyWSSHandler({ wss, router: appRouter, createContext });

wss.on('connection', (ws) => {
  console.log(`➕➕ Connection (${wss.clients.size})`);
  ws.once('close', () => {
    console.log(`➖➖ Connection (${wss.clients.size})`);
  });
});
console.log('✅ WebSocket Server listening on ws://localhost:3001');

process.on('SIGTERM', () => {
  console.log('SIGTERM');
  handler.broadcastReconnectNotification();
  wss.close();
});
With WunderGraph, there's no such guide, because you never have to handle WebSocket connections yourself. Our goal with WunderGraph is that the developer can focus on the business logic of their API, which leads us to the next point.
tRPC vs. WunderGraph - Observables vs. Async Generators
While tRPC is using Observables to handle Subscriptions, WunderGraph is using Async Generators.
Here's an example of the tRPC API for Subscriptions:
import { EventEmitter } from 'events';
import { initTRPC } from '@trpc/server';
import { observable } from '@trpc/server/observable';
// Post is the event payload type, defined elsewhere in the app

const ee = new EventEmitter();
const t = initTRPC.create();

export const appRouter = t.router({
  onAdd: t.procedure.subscription(() => {
    // `resolve()` is triggered for each client when they start subscribing to `onAdd`
    // return an `observable` with a callback which is triggered immediately
    return observable<Post>((emit) => {
      const onAdd = (data: Post) => {
        // emit data to client
        emit.next(data);
      };
      // trigger `onAdd()` when `add` is triggered in our event emitter
      ee.on('add', onAdd);
      // unsubscribe function when client disconnects or stops subscribing
      return () => {
        ee.off('add', onAdd);
      };
    });
  }),
});
And here's the equivalent in WunderGraph:
// .wundergraph/operations/users/subscribe.ts
import { EventEmitter } from 'events';
import { createOperation } from '../../generated/wundergraph.factory';

const ee = new EventEmitter();

export default createOperation.subscription({
  handler: async function* () {
    let resolve: (data: any) => void;
    const listener = (data: any) => resolve(data);
    try {
      let promise = new Promise((res) => (resolve = res));
      ee.on('event', listener);
      while (true) {
        yield await promise;
        promise = new Promise((res) => (resolve = res));
      }
    } finally {
      ee.off('event', listener);
    }
  },
});
What's the difference? It might be a personal preference as I mostly develop in Golang, but I think Async Generators are easier to read because the flow is more linear. You can more or less read the code from top to bottom—the same way it's being executed.
Observables, on the other hand, use callbacks and are not as straightforward to read. I prefer to register the event listener and then yield events instead of emitting events and then registering a callback.
tRPC vs. WunderGraph - Code as Router vs Filesystem as Router
tRPC is using a code-based router, while WunderGraph is using a filesystem-based router. Using the filesystem as a router has many advantages. It's easier to understand the context and reasoning behind code because you can see the structure of your API in the filesystem. It's also easier to navigate, as your IDE can jump directly to the file you wish to edit. And last but not least, it's easier to share and reuse code.
Conversely, a code-based router is much more flexible because you're not limited to the filesystem.
tRPC vs. WunderGraph - When you're scaling beyond just TypeScript
It's amazing when you're able to build your entire stack in TypeScript, but there are certain limitations to this approach. You'll eventually run into the situation where you want to write a service in a different language than TypeScript, or you want to integrate with 3rd party services.
In this case, you'll end up manually managing your API dependencies with a pure TypeScript approach. This is where I believe WunderGraph shines. You can start with a pure TypeScript approach and then gradually transition to a more complex setup by integrating more and more internal and external services. We're not just thinking about day one but also offer a solution that scales beyond a small team that's working on a single codebase.
The future of Isomorphic TypeScript APIs
That said, I believe that Isomorphic TypeScript APIs will have a great future ahead of them as they provide an amazing developer experience. After all, that's why we added them to WunderGraph in the first place.
I'm also excited to share some ideas we've got for the future of Isomorphic TypeScript APIs. The current approach is to define single procedures/operations that are independent of each other.
What if we could adopt a pattern similar to GraphQL, where we define relationships between procedures and allow them to be composed? For example, we could define a User procedure at the root and then nest a Posts procedure inside it.
Here's an example of how this might look:
// .wundergraph/operations/users.ts
import { createOperation } from '../generated/wundergraph.factory';

export default createOperation.query({
  handler: async function () {
    return {
      id: 1,
      name: 'John Doe',
    };
  },
});
Now, we could query the User procedure and get the Posts procedure as a nested field by specifying the posts field in the operation.
import { useQuery, withWunderGraph } from '../../components/generated/nextjs';

const Users = () => {
  const { data } = useQuery({
    operationName: 'users/get',
    input: {
      id: '1',
    },
    include: {
      posts: true,
    },
  });
  return (
    <div style={{ color: 'white' }}>
      <div>{data?.id}</div>
      <div>{data?.name}</div>
      <div>{data?.bio}</div>
    </div>
  );
};

export default withWunderGraph(Users);
I'm not yet exactly sure about the ergonomics and implementation details of this approach, but it would allow us to have a more GraphQL-like experience while still being able to enjoy the benefits of type inference.
Do we really need selection sets down to the field level? Or could some way of nesting procedures/resolvers be enough?
On the other hand, not having this kind of functionality will eventually lead to a lot of duplication. As you're scaling your RPC APIs, you'll end up with a lot of procedures that are very similar to each other but with a few small differences because they're solving a slightly different use case.
Conclusion
I hope you enjoyed this article and learned something new about TypeScript and building APIs. I'm excited to see what the future holds for Isomorphic TypeScript APIs, and how they'll evolve. I think that this new style of building APIs will heavily influence how we think about full-stack development in the future.
If you're interested in playing around yourself, you can clone this example and try it out. I've also prepared a GitPod, so you can easily try it out in the browser.
However, one thing to bear in mind is that there's no one-size-fits-all solution. TypeScript RPC APIs are great when both frontend and backend are written in TypeScript. As you're scaling your teams and organizations, you might outgrow this approach and need something more flexible.
WunderGraph allows you to move extremely quickly in the early days of your project with a pure TypeScript approach. Once you hit a certain product market fit, you can gradually transition from a pure TypeScript approach to a more complex setup by integrating more and more internal and external services. That's what we call "from idea to IPO". A framework should be able to support you best in the different stages of your project.
Similarly to how aircraft use flap systems to adjust to different flight conditions, WunderGraph allows you to adjust to different stages of your project. During takeoff, you can use the pure TypeScript approach to get off the ground quickly. Once you're in the air, full flaps would create too much drag and slow you down. That's when you can gradually transition to leveraging the virtual Graph and split your APIs into smaller services.
At some point, you might even want to allow other developers and companies to integrate with your system through APIs. That's when a generated Postman Collection for all your Operations comes in handy. Your APIs cannot create value if nobody knows about them.
I'd love to hear your thoughts on this topic, so feel free to reach out to me on Twitter or join our Discord server to chat about it.