I have been looking for an excuse to learn Rust for some time now. Although I had read several tutorials and run through the Exercism.io problems, I am incapable of learning anything new unless I actually get a chance to build something with it. But I didn’t have any good ideas!
At GitHub, we’ve been using GraphQL in production for some time now. A few months ago there was some chatter about defining a GraphQL schema using the IDL format. While the idea is super interesting, I realized I had finally found a project worth getting excited about: I could write a GraphQL IDL parser in Rust! Rust has an incredibly awesome FFI system, and, since I had experience wrapping C libraries in Ruby, I decided I’d also try wrapping a dylib generated by Rust into Ruby as a double whammy.
Why?
Aside from “learn something new,” there are two reasons I embarked on this project:
- I absolutely love text processing problems, whether that’s writing a quick regular expression or crafting a grammar in Bison/Flex. I’ve written several context-free grammars in C, and I wanted to see how those same techniques could work in Rust. I’ve also had trouble setting up and distributing the tooling for these parsers, like crafting a CMake file that worked on different platforms or struggling to guess all the necessary `apt-get` packages Travis CI needs whitelisted. A pure Rust implementation that could generate platform-independent packages was very appealing.
- Most of the GraphQL tooling, I’m sorry to say, appears to be in JavaScript. While that’s cool[^1], I really believe that when it comes to creating a tool ecosystem, base implementations ought to be in a language that can be consumed by other projects. In other words, the world doesn’t need different GraphQL IDL parsers in Node, and Ruby, and Python, and Java, and Scala. The heavy lifting should be done in something like C or Rust, and then those higher-level languages ought to make use of that one foundational library. This ensures consistency and correctness, no matter what your backend is written in.
Getting started
Before I even started thinking about an FFI, the first thing I needed to do was write the actual Rust library. Although the GraphQL IDL spec is not formal yet, it already has a BNF-ish format describing the grammar. It appeared to be context-free, so I went poking around crates.io looking for a crate to use. lalrpop seemed like a good package! What I liked about it (aside from the excellent documentation) was that I could write down the IDL grammar almost precisely as the spec had it listed. Some of the other parser libraries required Rust code to define the grammar, which seemed very language specific.
The IDL spec defines several things, but I chose to focus just on the types for now. GraphQL has six distinct types:
- Scalars
- Objects
- Enums
- Interfaces
- Unions
- Input objects
Of these, scalars are by far the easiest things to reason about. A definition in lalrpop’s grammar format looks like this:
```
ScalarTypeDefinition: TypeDefinition = {
    <d:Description?> "scalar" <n:Name> => {
        TypeDefinition::ScalarType(GraphQLScalar::new(d, n))
    }
};
```
An optional description (`Description?`), the keyword `scalar`, and a name. That’s it! Each type definition I concentrated on taught me something about working in Rust. For example, working on this simple piece taught me how to work with `Option`s, which was a new concept for me.
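An `Option` just encodes “this piece might not be here.” Here’s a tiny toy example (nothing to do with the parser itself) of the sort of handling that was new to me:

```rust
fn main() {
    // The grammar hands the action an Option<String>: Some(...) when a
    // description was written, None when the `Description?` part is absent.
    let with_description: Option<String> = Some(String::from("An ISO-8601 date."));
    let without_description: Option<String> = None;

    for description in vec![with_description, without_description] {
        match description {
            Some(text) => println!("description: {}", text),
            None => println!("no description given"),
        }
    }
}
```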
Once I had scalars defined (the easiest choice), I figured I should move on to the hardest one next–objects. An object definition looks like this:
```
ObjectTypeDefinition: TypeDefinition = {
    <d:Description?> "type" <n:Name> <i:(ImplementsInterfaces?)> <r:(Directives?)> "{" <f:(Fields?)> "}" => {
        TypeDefinition::ObjectType(GraphQLObject::new(d, n, i, r, f))
    },
};
```
The first three pieces are nearly identical: optional description, keyword `type`, a name. Then things get wacky: there’s an optional comma-separated list of names defining interfaces, an optional list of directives, and an optional list of fields, which have their own complex grammars:
```
Fields: Vec<GraphQLField> = {
    <f:FieldDefinition+> => {
        f
    }
};

FieldDefinition: GraphQLField = {
    <d:Description?> <n:FieldName> <a:(ArgumentsDefinition?)> ":" <t:FieldType> <r:(Directives?)> => {
        GraphQLField::new(d, n, t, a, r)
    }
};
```
A field can be one or more field definitions (`FieldDefinition+`), and a field definition has its own set of items to parse.
I broke the problem down piece by piece, continuing to focus on learning new language features. For example, I worked on comma-separated interfaces first, because I didn’t know how to work with vectors (“growable arrays”). Then, I moved on to the fields of an object, and finally circled back to add the directives.
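If, like me, you’re new to Rust, a vector really is just a growable array; here’s a throwaway example (the interface names are made up):

```rust
fn main() {
    // A Vec is Rust's growable array; handy for "zero or more interface names".
    let mut interfaces: Vec<String> = Vec::new();
    interfaces.push(String::from("Node"));
    interfaces.push(String::from("Starrable"));

    println!("implements {} interfaces: {:?}", interfaces.len(), interfaces);
}
```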
The good news is that the other four GraphQL types are vast simplifications on the same themes. For example, an enum definition is just a pipe-separated list of names; since I knew how to handle a comma-separated list, a pipe-separated list wasn’t much different. Interfaces and input objects also have lists of fields and directives, so I could simply reuse the object grammars there and be done with it.
Don’t repeat yourself
The parts of the Rust language that I really enjoyed working with are implementations and macros.
Before getting to implementations, I have to talk about `struct`s and `enum`s first. A `struct` in Rust can be a C-resembling structure that defines a set of fields and types. For example, for a scalar and an object, they might look like:
```rust
pub struct GraphQLScalar {
    description: Option<String>,
    name: String,
}

pub struct GraphQLObject {
    description: Option<String>,
    name: String,
    implements: Option<Vec<String>>,
    directives: Option<Vec<GraphQLDirective>>,
    fields: Option<Vec<GraphQLField>>,
}
```
Rust has typed enums, which is something I’d never heard of before. It’s a bit like a `union` in C, as opposed to an enum that just serves as integer flags. A Rust `enum` is a thing that can represent one of several variants:
```rust
pub enum TypeDefinition {
    ScalarType(GraphQLScalar),
    ObjectType(GraphQLObject),
    EnumType(GraphQLEnum),
    InterfaceType(GraphQLInterface),
    UnionType(GraphQLUnion),
    InputObjectType(GraphQLInputObject),
}
```
When defining the values of a `TypeDefinition`, we say that a `ScalarType` is just the struct `GraphQLScalar` from before, an `ObjectType` is the struct `GraphQLObject`, and so on.
Here’s where it gets exciting. Within an implementation on `TypeDefinition`, you can define a method which is used by all of the types. This acts as a sort of method definition on a superclass, which is inherited by the subclasses (except replace the word `class` with `struct` 😉). Every item in the `TypeDefinition` `enum` can use the method, after `match`ing on the caller’s type. For example:
```rust
impl TypeDefinition {
    pub fn typename(&self) -> &str {
        match *self {
            TypeDefinition::ScalarType(_) => "scalar",
            TypeDefinition::ObjectType(_) => "object",
            TypeDefinition::EnumType(_) => "enum",
            TypeDefinition::InterfaceType(_) => "interface",
            TypeDefinition::UnionType(_) => "union",
            TypeDefinition::InputObjectType(_) => "input_object",
        }
    }
}
```
For every type in that enum, I’ve defined a method, `typename`. When `typename` is called, the `match` selector will return a `&str` depending on the type of the caller. `match` is pretty much just a `case` statement, except the compiler will also complain if every possible case isn’t covered. For example, if I omitted `InputObjectType` from above, the Rust compiler would fail with:
```
   --> src/type_definition.rs:217:15
    |
217 |         match *self {
    |               ^^^^^ pattern `InputObjectType(_)` not covered

error: aborting due to previous error(s)
```
For types that had certain properties, I defined methods that matched on just those values:
```rust
impl TypeDefinition {
    pub fn fields(&self) -> Option<Vec<GraphQLField>> {
        match *self {
            TypeDefinition::ObjectType(GraphQLObject { ref fields, .. }) |
            TypeDefinition::InterfaceType(GraphQLInterface { ref fields, .. }) |
            TypeDefinition::InputObjectType(GraphQLInputObject { ref fields, .. }) => {
                fields.to_owned()
            }
            _ => panic!("That method does not exist for this type."),
        }
    }
}
```
For situations that got a bit more complicated, I used macros. Rust macros provide metaprogramming for the language. They are obnoxious to read and write, but once you use them, their value is very clear. For example, in the same `impl TypeDefinition`, I wrote a macro to define common methods across the types:
```rust
impl TypeDefinition {
    impl_graphql_objects_common_methods! {
        ScalarType:GraphQLScalar,
        ObjectType:GraphQLObject,
        EnumType:GraphQLEnum,
        InterfaceType:GraphQLInterface,
        UnionType:GraphQLUnion,
        InputObjectType:GraphQLInputObject
    }
}
```
The `!` denotes a macro; this one is defined as:
```rust
macro_rules! impl_graphql_objects_common_methods {
    (
        $(
            $x:ident:$y:ident
        ),*
    ) => {
        pub fn name(&self) -> &str {
            match *self {
                $(
                    TypeDefinition::$x($y { ref name, .. }) => &name
                ),*
            }
        }
    }
}
```
Yeah, it looks confusing. Let’s break it down. Since every GraphQL type has a name, I wanted to add a method `name` on all of the types in the enum. I first defined what my pattern of inputs would look like:
```
$(
    $x:ident:$y:ident
),*
```
It’s two identifiers (`ident`) that I have labelled `$x` and `$y` (`$x:ident` and `$y:ident`), that are separated by a colon, and that are part of a comma-separated sequence (`,*`). I can then reuse `$x` and `$y` throughout the macro definition. The original list I provided (`ScalarType:GraphQLScalar`, etc.) just inserts `ScalarType` as `$x` and `GraphQLScalar` as `$y` and expands the sequence:
```rust
match *self {
    TypeDefinition::ScalarType(GraphQLScalar { ref name, .. }) => &name,
    TypeDefinition::ObjectType(GraphQLObject { ref name, .. }) => &name,
    // ...
}
```
Like most everything else in Rust, once the concepts and syntax click, even the most obtuse parts start to make sense. With macros, the parser was completed much more quickly.[^2]
`rustc` is helpful and unforgiving
Much has been said about how tremendously unforgiving the Rust compiler is, and a lot of that is true. Rust is not a language you can just pick up in a few days and be super productive in. The compiler Knows Everything, and will absolutely call you out on anything it finds dubious. If you think Go complaining about unused package imports is annoying, you haven’t seen anything yet.
Earlier, I talked about how great it is that the compiler will throw an error if all the `match` arms aren’t covered. Unfortunately, it’s not so fun when it’s angry about moving out of a borrowed context:
```
error: cannot move out of borrowed content
foo.rs:45     else if SignedInt::abs(dir as i32 - self.direction as i32) == 2 { return 1; }
                                     ^~~~
foo.rs:47:21: 47:24 error: use of moved value: `dir`
foo.rs:47     self.direction = dir;
                               ^~~
foo.rs:45:26: 45:29 note: `dir` moved here because it has type `foo::Direction`, which is non-copyable
foo.rs:45     else if SignedInt::abs(dir as i32 - self.direction as i32) == 2 { return 1; }
```
Programming in Rust “guarantees memory safety” because the compiler does all the analysis, and will refuse to compile anything it deems unsafe. The compiler does a great job at telling you where you screwed up. More often, it’ll try to correct you when you’ve misunderstood variable references and how they’re consumed. In the beginning, it feels like you need a solid understanding of capital-C Computer capital-S Science to understand what certain problems are.
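For example, here’s a contrived little snippet (my own toy code, nothing to do with the parser) showing the kind of move error that kept biting me:

```rust
// Passing `name` by value moves it into `consume`, so the commented-out
// line below would fail to compile.
fn consume(s: String) -> usize {
    s.len()
}

fn main() {
    let name = String::from("GraphQLScalar");
    let len = consume(name);
    // println!("{} is {} bytes", name, len); // error[E0382]: borrow of moved value: `name`

    // The fix: have `consume` borrow instead, e.g. `fn consume(s: &str) -> usize`,
    // and call it as `consume(&name)`.
    println!("{} bytes", len);
}
```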
It took me several solid months of repeatedly leaving and returning to this project before I finally began to understand what the compiler was agitated about, to the point where I would write code and catch my errors before compiling it. In a production environment, this is, of course, much better than being allowed to write crummy code. But whereas you have decades of material on writing safe C, there isn’t a lot of content (yet) available on such things as converting strings (`&str` vs `String` vs `Cow`?), or how to safely pass around references.
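If it helps, here’s another toy illustration (again, not from the parser) of those three string types side by side:

```rust
use std::borrow::Cow;

fn main() {
    let owned: String = String::from("scalar DateTime"); // heap-allocated and growable
    let borrowed: &str = &owned;                          // a borrowed view into it
    let either: Cow<str> = Cow::Borrowed(borrowed);       // "clone on write": borrowed *or* owned

    // Converting between them is mostly mechanical once you've seen it a few times:
    let owned_again: String = borrowed.to_string();
    let owned_cow: Cow<str> = Cow::Owned(owned_again);

    println!("{} / {} / {}", owned, either, owned_cow);
}
```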
Having said that, of course, once your program does compile, it’s almost guaranteed to work exactly how you expect it to, barring any logic errors.
Working with the FFI
It took me a month-and-a-half to get just the pure Rust library parser working. And it took just about another five days to write the FFI to C, plus the bindings to Ruby. How did that happen so quickly?
I’ll start by saying that I first looked at two projects, helix and ruru, which are designed to allow Rust code to interoperate with Ruby much more easily. While they looked nice, they required changes to my core Rust library, and that was something I was not willing to do. For example, Helix uses macros to transform Rust code into something that can be called by Ruby:
```rust
ruby! {
    class Console {
        def log(string: String) {
            println!("LOG: {}", string);
        }
    }
}
```
I didn’t want my Rust library to only support Ruby, and I didn’t want it littered with Ruby-only macros, either. Since my goal was to create a foundational library that could be used in a variety of higher-level languages, I figured I would probably have to write my own C glue code. I’m going to humblebrag a bit here, because as far as I know, no one has written plain C to map between a Rust-generated dylib and Ruby yet–or at least, I couldn’t find any practical examples online.
I went about this incorrectly at first. I had created my Ruby gem, and had imported the Rust parser as a submodule. I began writing the C extension code the way I was used to, but partway through I stopped and realized I had to start over. If you’ve never written a Ruby extension, it’s mostly vanilla C with some additional methods imported from `ruby.h`. For example:
```c
struct mystruct {
    int a;
    int b;
};

static VALUE example_open(VALUE m)
{
    struct mystruct *conn;

    /**** allocate a struct ****/
    conn = ALLOC(struct mystruct);
    conn->a = 25;
    conn->b = 99;

    /**** return a Ruby object that maps to the struct ****/
    return Data_Wrap_Struct(c_conn, NULL, mystruct_free, conn);
}
```
I had begun to write “Ruby C” code instead of first understanding how the plain C mapping should work, so I scrapped that idea and began working on just the C bindings. I was fortunate enough to come across regex-capi, which provides C bindings to Rust’s regular expression library. It provided a lot of guidance on how to set up an FFI library in Rust, including embedding a new Cargo.toml in a separate directory as well as writing a simple example.c to test the basics.
This wasn’t the hardest part of the project, but it was definitely the trickiest. Once again, I started with scalars, so I wrote a basic header file for what that looks like:
```c
typedef struct GraphQLScalar {
    const char* name;
    const char* description;
} GraphQLScalar;

typedef struct GraphQLTypes {
    const char* typename;
    union
    {
        GraphQLScalar scalar;
    };
} GraphQLTypes;

/* This is the actual method exposed by Rust FFI */
uint8_t gqlidl_parse_schema(char* schema, GraphQLTypes** types, size_t* types_len);
```
To write an FFI binding in Rust, you write Rust code, but you return C types. For example:
```rust
#[repr(C)]
pub struct Scalar {
    typename: *const c_char,
    name: *const c_char,
    description: *const c_char,
}
```
The work to do the parsing is all in Rust; all the FFI has to do is take that output from Rust and put it into a data format that C can receive. `repr(C)` is crazy important: it tells the compiler to lay out the fields in the same memory order a C compiler would.
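To give a flavor of that glue, here’s a heavily simplified, hypothetical export; `parse_first_name` is a made-up function, and the real `gqlidl_parse_schema` returns an array of types rather than a single string:

```rust
use std::ffi::{CStr, CString};
use std::os::raw::c_char;

// Hypothetical, simplified export: read a C string in, hand one C string back
// through an out-parameter, and return 0 on success (like the uint8_t above).
#[no_mangle]
pub extern "C" fn parse_first_name(schema: *const c_char,
                                   name_out: *mut *const c_char) -> u8 {
    // Borrow the incoming C string as a Rust &str.
    let schema = match unsafe { CStr::from_ptr(schema) }.to_str() {
        Ok(s) => s,
        Err(_) => return 1, // not valid UTF-8
    };

    // Pretend we parsed something and pulled a name out of it.
    let name = schema.split_whitespace().last().unwrap_or("").to_string();

    // Give ownership of a freshly allocated C string to the caller.
    let c_name = match CString::new(name) {
        Ok(c) => c,
        Err(_) => return 1, // contained an interior NUL byte
    };
    unsafe { *name_out = c_name.into_raw() as *const c_char; }

    0
}
```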
Some of the more fun patterns are when you deal with arrays of custom objects. Here’s a truncated look at the header file for an object:
```c
typedef struct GraphQLField {
    const char* description;
    const char* name;
} GraphQLField;

typedef struct array_of_fields {
    size_t length;
    const GraphQLField* data;
} array_of_fields;

typedef struct GraphQLObject {
    const char* description;
    const char* name;
    struct array_of_fields fields;
} GraphQLObject;
```
In Rust, that looks like:
```rust
#[derive(Copy, Clone)]
#[repr(C)]
pub struct Object {
    typename: *const c_char,
    description: *const c_char,
    name: *const c_char,
    fields: ArrayOfFields,
}

#[derive(Copy, Clone)]
#[repr(C)]
struct Field {
    description: *const c_char,
    name: *const c_char,
}

#[derive(Copy, Clone)]
#[repr(C)]
struct ArrayOfFields {
    length: size_t,
    values: *const *const c_void,
}
```
For brevity, I’ve cut some pieces out, but the idea is that in C, you create a struct that contains a `size_t` indicating the number of items in the array, and a pointer to the actual data type (in this case, `GraphQLField`). The Rust code behaves similarly: `values` is stored as a `Vec<Field>`. `#[repr(C)]` again ensures that the data crossing the FFI wire is kept in the same memory layout.
A word of caution, if you’re writing your own FFI bindings: `std::mem::forget` is your bitter, bitter friend. Since Rust is so good at tracking memory–when objects are used, and when they go out of scope–you have to remember to “forget” arrays and such if they are being sent over the wire back to C. From my understanding, that’s because Rust would otherwise free the memory used by that Rust object, not knowing that it still needs to be made available to the C library. If that’s the case, you’ll still need to track and free the memory at some point in the lifecycle of your program.
Like I said: Computer Science.
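Here’s roughly the pattern I mean, sketched out; `ExportedField` and `export_fields` are made-up names, and the real code uses the double-pointer layout shown above:

```rust
use std::mem;

// Stand-in for whatever actually crosses the boundary.
pub struct ExportedField {
    pub name: String,
}

// Made-up helper: hand C a pointer and a length, then "forget" the buffer so
// Rust doesn't free it when this function returns. Whoever holds the pointer
// is now responsible for freeing it later, or it leaks.
pub fn export_fields(fields: Vec<ExportedField>) -> (*const ExportedField, usize) {
    let boxed: Box<[ExportedField]> = fields.into_boxed_slice(); // drop excess capacity
    let len = boxed.len();
    let ptr = boxed.as_ptr();
    mem::forget(boxed);
    (ptr, len)
}
```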
I would say that writing the FFI Rust code is a bit of a slog, since you’ve essentially rewritten the same structure three different times:
- A bunch of `struct`s in the pure Rust lib
- A bunch of `struct`s in the Rust FFI lib, returning C data types
- A bunch of `struct`s in the C header
But, since I knew how I wanted the C structures to look, it wasn’t too bad to create that second set of Rust FFI lib `struct`s.
After populating the C header file, I wrote a very small program as a proof-of-concept. This was intended to be a sanity check before writing full-fledged tests in C:
```c
/* These standard headers (plus the parser's own header) are needed to build this. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main() {
    GraphQLTypes* types = NULL;
    size_t types_len = 0;
    uint8_t err;

    err = gqlidl_parse_schema("# Yeah yeah\nscalar DateTime", &types, &types_len);

    if (err > 0) {
        printf("Error: Return code %d", err);
        exit(err);
    }

    for (size_t i = 0; i < types_len; i++) {
        printf("typename: %s\n", types[i].typename);
        printf("desc: %s\n", types[i].scalar.description);
        printf("name: %s\n", types[i].scalar.name);
        if (strncmp(types[i].typename, "scalar", 6) == 0) {
            printf("a scalar!");
        }
    }

    return err;
}
```
Wrap it in Ruby
The Ruby part of this project is actually a bit dull! Once I had the C bindings to the Rust-generated dylib, all I had to do was make sure I could compile an extension that made use of that library.
I always use rake-compiler for Ruby extensions (as opposed to ffi), mostly because I don’t want the `ffi` DSL to abstract away what the code I’m writing is actually doing.
The extconf.rb is surprisingly simple. Here’s the good stuff:
```ruby
# FFI_DIR points to the submodule's target directory
`cargo build #{target} --manifest-path #{File.join(FFI_DIR, 'Cargo.toml')}`

$LIBS << ' -lgraphqlidlparser'

HEADER_DIRS = [INCLUDEDIR, HEADER_DIR]
LIB_DIRS = [LIBDIR, RELEASE_DIR]

dir_config('graphql-idl-parser', HEADER_DIRS, LIB_DIRS)

$CFLAGS << " #{flag} -std=c11 -Wall -pedantic -Werror -Wfatal-errors -Wstrict-aliasing"

create_makefile('graphql-idl-parser/graphqlidlparser')
```
The C extension code is not much different from the example posted above; it just uses some of the `ruby.h` methods mentioned earlier. Here’s a snippet:
```c
static VALUE GRAPHQLIDLPARSERPROCESS_process(VALUE self)
{
    VALUE rb_schema = rb_iv_get(self, "@schema");
    VALUE rb_result = rb_obj_alloc(rb_cResult);
    VALUE rb_types;
    VALUE rb_hash;

    GraphQLTypes* types = NULL;
    size_t types_len = 0;
    uint8_t err;

    const char *schema = StringValueCStr(rb_schema);

    err = gqlidl_parse_schema(schema, &types, &types_len);

    if (err > 0) {
        printf("Error: Return code %d", err);
        exit(err);
    }

    rb_types = rb_ary_new();

    /**** attr_reader :types ****/
    rb_define_attr(rb_cResult, "types", 1, 0);

    for (size_t i = 0; i < types_len; i++) {
        rb_hash = rb_hash_new();
        if (strcmp(types[i].typename, "scalar") == 0) {
            rb_hash_aset(rb_hash, CSTR2SYM("typename"), convert_string(types[i].typename));
            rb_hash_aset(rb_hash, CSTR2SYM("name"), convert_string(types[i].scalar_type.name));
            rb_hash_aset(rb_hash, CSTR2SYM("description"), convert_string(types[i].scalar_type.description));
        }
        rb_ary_push(rb_types, rb_hash);
    }

    rb_iv_set(rb_result, "@types", rb_types);

    return rb_result;
}
```
“Benchmarks”
This isn’t really a fair comparison, but once the project was done, I decided to compare graphql-ruby’s IDL loading with my Rust-backed gem.
You can fetch the IDL with:
```
curl -u gjtorikian:${GITHUB_TOKEN} -H "Accept: application/vnd.github.idl+json" https://api.github.com/graphql
```
It’s about 180KB and 7,000 lines long.
```
Warming up --------------------------------------
           pure ruby     1.000  i/100ms
            this gem     7.000  i/100ms
Calculating -------------------------------------
           pure ruby      6.527  (± 0.0%) i/s -     33.000  in  5.082540s
            this gem     73.511  (± 5.4%) i/s -    371.000  in  5.064868s

Comparison:
            this gem:       73.5 i/s
           pure ruby:        6.5 i/s - 11.26x slower
```
I consider the project a success!
Links
- graphql-idl-parser – the Rust library
- graphql-idl-parser-ruby – the Ruby library
Acknowledgements
Both https://users.rust-lang.org/ and the IRC channel are filled with tremendously helpful people. The Rust FFI Omnibus guide is also terrific. This Reddit thread and the projects in the comments were crucial to making things finally click for me.
[^1]: Just kidding, I hate it.

[^2]: See this “Macros are pretty cool” commit for an example of how much code I could deduplicate.