How Black Game Devs works
- Gatsby.js as a React framework. (Solves Pillars 1 and 3)
- MDX.js as a Markdown rendering engine that allows us to interweave React components. (Solves Pillars 1, 2, and 3)
- Lunr.js as a static-site search indexer. (Solves Pillar 4)
- Framer Motion for the sauce and feel of the site.
There is a folder in the project called `directory`. This folder contains individual entries in the form of MDX files. We have a 1-file-per-entry rule. Each file is treated as 1 entry, but entries can be filtered/designated by extending the frontmatter YAML of an MDX file. For example, a company can be designated by putting `isCompany` at the top. This rule helps us avoid the merge conflicts that appeared when we used one large object to add/edit/remove people from the directory. It also helps people find themselves to edit their own data.
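As a sketch of what an entry looks like (the `name` field here is illustrative, not necessarily the project's actual schema), a company entry might start like this:

```mdx
---
name: Example Studio
isCompany: true
---
```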
Need to pull in old data from the directory? A transformation script leverages json2md to transform all the objects and their keys from the companies/people JSONs into individual MDX files. You can run it by executing `yarn transform`. This script is meant to be used only once! It will generate MDX files with a `_v1` suffix so you know the file contains data from the previous version of the site.
NOTE: The transformer does its best to get everyone's information in order. For folks who used default values in certain places, we skip over that bit of data. You can see how the data gets transformed in the `mdxConverter.js` file.
Another note: Previously, images were served via URL. This is still possible with the new system, but for the sake of consistency (and Lighthouse scores) the transformer downloads entry images from the URLs provided. In the case a URL leads to a 404, the image will be skipped and not included in the `static/directory_images` folder.
Previously a lot of this heavy lifting was done in a script on client load. This was one of the obstacles to scalability and why the site started lagging behind (#122). This time around, filters are automatically generated based on data that exists in the tags we want to filter. We leverage the different React component fragments (like `Games`, `Location`, and `Skills`) to act as anchors when we read the raw file to pull out the specifically typed tags. This makes it easier for folks to write in whatever skills they want without being locked into the previous `art`, `game design`, etc. To alleviate duplicates, there are a few algorithms that strip the text down to a camelCase key and check that no duplicates share the same key.
i.e. `game design`, `Game Design`, `game DESIGN`, `GAME DESIGN`, and `gameDesign` all share the same camelCase key `gameDesign`. However, `gamedesign`, `gam3d3sign`, and any other odd variations will be treated as individual filters. You can find the algorithms in the `utils.js` file. You can also find the way we conglomerate filter data in `SiteContext.js`, lines 45-81.
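A minimal sketch of that normalization idea (illustrative only; the project's actual implementation lives in `utils.js`):

```javascript
// Collapse case/spacing variants of a tag into one camelCase key.
// "game design", "GAME DESIGN", and "gameDesign" all map to "gameDesign",
// while "gamedesign" stays distinct because there is no word boundary to find.
function toCamelKey(tag) {
  const words = tag
    .trim()
    .replace(/([a-z])([A-Z])/g, '$1 $2') // split existing camelCase boundaries
    .toLowerCase()
    .split(/\s+/);
  return words
    .map((word, i) => (i === 0 ? word : word[0].toUpperCase() + word.slice(1)))
    .join('');
}
```

Deduplication then amounts to keeping one filter per unique key, e.g. in a `Map` keyed by `toCamelKey(tag)`.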
NOTE: The `filterFragments` array MUST match the React component of the same name in the `shortcodes` folder, or the data won't be pulled for the fragments defined.
Another note: The `filterFragment` method is designed to treat individual new lines as individual filters. It won't solve for #107, where we just want to see whether an entry has games at all. But you can add that kind of granularity to the SiteContext or the `index.js` page, where the data is properly filtered.
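To make the newline-splitting behavior concrete, here is a hypothetical sketch of pulling tags out of a fragment in a raw MDX string (the function name and shape are illustrative, not the actual project code):

```javascript
// Find the body of a fragment like <Skills>...</Skills> in raw MDX text
// and treat each non-empty line inside it as one filter tag.
function extractFragmentTags(rawMdx, fragmentName) {
  const pattern = new RegExp(`<${fragmentName}>([\\s\\S]*?)</${fragmentName}>`);
  const match = rawMdx.match(pattern);
  if (!match) return [];
  return match[1]
    .split('\n')                 // each new line is its own filter
    .map((line) => line.trim())
    .filter(Boolean);            // drop blank lines
}
```

Under this scheme, `<Skills>` containing `art` and `Game Design` on separate lines yields two filters, which is why per-entry granularity like "has any games" needs extra handling elsewhere.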
We leverage Lunr.js and build our search index via Gatsby's built-in GraphQL/Sift querying. If you look at the `gatsby-node.js` file, you can find where we fetch the data and build the lunrIndex for use on the front end. This article was the inspiration for this execution.
NOTE: While gatsby-plugin-lunr exists, it doesn't give us the frictionless flexibility we need for fuzzy search and improved tokenization to improve the search experience. It also includes support for index localization, which is out of scope for this project's needs.
In the project itself you'll find a module called `search`. Inside the `SearchInput.js` file is where Lunr is leveraged to run our search. There is documentation in the code explaining exactly what each line does. Lunr's documentation is challenging to use, but you can reference these for API usage:
- https://lunrjs.com/docs/lunr.html
- https://lunrjs.com/docs/lunr.Query.html <- The more important one