How to Develop and Deploy Micro-Frontends with Single-SPA

Micro-frontends are the future of frontend web development.

Inspired by microservices, which let you break up your backend into smaller pieces, micro-frontends allow you to build, test, and deploy pieces of your frontend app independently of each other.

Depending on the micro-frontend framework you choose, you can even have multiple micro-frontend apps, written in React, Angular, Vue, or anything else, coexisting peacefully in the same larger app.

In this article we'll develop an app composed of micro-frontends using single-spa and deploy it to Heroku.

We'll set up continuous integration using Travis CI. Each CI pipeline will bundle a micro-frontend app's JavaScript and then upload the resulting build artifacts to AWS S3.

Finally, we'll make an update to one of the micro-frontend apps and see how it can be deployed to production independently of the other micro-frontend apps.

Overview of the Demo App

Demo app - the end result

Before we discuss the step-by-step instructions, let's get a quick overview of what makes up the demo app. This app is composed of four sub-apps:

  1. A container app that serves as the main page container and coordinates the mounting and unmounting of the micro-frontend apps
  2. A micro-frontend navbar app that's always present on the page
  3. A micro-frontend “page 1” app that only shows when it's active
  4. A micro-frontend “page 2” app that also only shows when it's active

These four apps live in separate repos, available on GitHub, which I've linked to above.

The end result is fairly simple in terms of the user interface, but, to be clear, the user interface isn't the point here.

If you're following along on your own machine, by the end of this article you'll also have all the underlying infrastructure you need to get started with your own micro-frontend app.

Alright, grab your scuba gear, because it's time to dive in!

Creating the Container App

To generate the apps for this demo, we'll use a command-line interface (CLI) tool called create-single-spa. At the time of writing, the version of create-single-spa is 1.10.0, and the version of single-spa installed by the CLI is 4.4.2.

We'll create the container app (also called the root config) by following these steps:

mkdir single-spa-demo
cd single-spa-demo
mkdir single-spa-demo-root-config
cd single-spa-demo-root-config
npx create-single-spa

Then we'll walk through the CLI prompts:

  1. Select “single-spa root config”
  2. Select “yarn” or “npm” (I chose “yarn”)
  3. Enter an organization name (I used “thawkin3”, but it can be anything you want)

Great! Now, if you check out the single-spa-demo-root-config directory, you should see a skeleton root config app. We'll customize it in a bit, but first let's also use the CLI tool to create our other three micro-frontend apps.

Creating the Micro-Frontend Apps

To create our first micro-frontend app, the navbar, we'll follow these steps:

cd ..
mkdir single-spa-demo-nav
cd single-spa-demo-nav
npx create-single-spa

Then we'll walk through the CLI prompts:

  1. Select “single-spa application / parcel”
  2. Select “react”
  3. Select “yarn” or “npm” (I chose “yarn”)
  4. Enter the organization name, the same one used when creating the root config app (“thawkin3” in my case)
  5. Enter a project name (I used “single-spa-demo-nav”)

Now that we've created the navbar app, we can follow these same steps to create our two page apps. We'll replace each place we see “single-spa-demo-nav” with “single-spa-demo-page-1” the first time through and then with “single-spa-demo-page-2” the second time through.

At this point we've created all four apps we need: one container app and three micro-frontend apps. Now it's time to hook our apps together.

Registering the Micro-Frontend Apps with the Container App

As mentioned above, one of the container app's main responsibilities is to coordinate when each app is “active” or not. In other words, it handles when each app should be shown or hidden.

To help the container app understand when each app should be shown, we provide it with what are called “activity functions.” Each app has an activity function that simply returns a boolean, true or false, for whether or not the app is currently active.

Inside the single-spa-demo-root-config directory, in the activity-functions.js file, we'll write the following activity functions for our three micro-frontend apps.

export function prefix(location, ...prefixes) {
  return prefixes.some(
    prefix => location.href.indexOf(`${location.origin}/${prefix}`) !== -1
  );
}

export function nav() {
  // The nav is always active
  return true;
}

export function page1(location) {
  return prefix(location, 'page1');
}

export function page2(location) {
  return prefix(location, 'page2');
}

Next, we need to register our three micro-frontend apps with single-spa. To do that, we use the registerApplication function. This function accepts a minimum of three arguments: the app name, a method to load the app, and an activity function that determines when the app is active.

Inside the single-spa-demo-root-config directory, in the root-config.js file, we'll add the following code to register our apps:

import { registerApplication, start } from "single-spa";
import * as isActive from "./activity-functions";

registerApplication(
  "@thawkin3/single-spa-demo-nav",
  () => System.import("@thawkin3/single-spa-demo-nav"),
  isActive.nav
);

registerApplication(
  "@thawkin3/single-spa-demo-page-1",
  () => System.import("@thawkin3/single-spa-demo-page-1"),
  isActive.page1
);

registerApplication(
  "@thawkin3/single-spa-demo-page-2",
  () => System.import("@thawkin3/single-spa-demo-page-2"),
  isActive.page2
);

start();

Now that we've set up the activity functions and registered our apps, the last step before we can get this running locally is to update the local import map inside the index.ejs file in this same directory.

Inside the head tag, we'll add the following code to specify where each app can be found when running locally:

{
  "imports": {
    "@thawkin3/root-config": "//localhost:9000/root-config.js",
    "@thawkin3/single-spa-demo-nav": "//localhost:9001/thawkin3-single-spa-demo-nav.js",
    "@thawkin3/single-spa-demo-page-1": "//localhost:9002/thawkin3-single-spa-demo-page-1.js",
    "@thawkin3/single-spa-demo-page-2": "//localhost:9003/thawkin3-single-spa-demo-page-2.js"
  }
}

Each app contains its own start script, which means each app will run locally on its own development server during local development. As you can see, our nav app is on port 9001, our page 1 app is on port 9002, and our page 2 app is on port 9003.

With our three apps registered, it's time to give our app a test run.

Test Run Locally

To get our app running locally, we can follow these steps:

  1. Open four terminal tabs, one for each app
  2. For the root config, in the single-spa-demo-root-config directory: yarn start (runs on port 9000 by default)
  3. For the nav app, in the single-spa-demo-nav directory: yarn start --port 9001
  4. For the page 1 app, in the single-spa-demo-page-1 directory: yarn start --port 9002
  5. For the page 2 app, in the single-spa-demo-page-2 directory: yarn start --port 9003

Now we'll navigate in the browser to //localhost:9000 to view our app.

We should see... text! Super exciting.

Demo app - main page

On our main page, the navbar is showing because the navbar app is always active.

Now let's navigate to //localhost:9000/page1. As you can see from our activity functions above, we've specified that the page 1 app should be active (shown) when the URL path begins with “page1”. So this activates the page 1 app, and we should now see the text for both the navbar and the page 1 app.

Demo app - page 1 route

One more time, let’s now navigate to //localhost:9000/page2. As expected, this activates the page 2 app, so we should see the text for the navbar and the page 2 app now.

Demo app - page 2 route

Making Minor Tweaks to the Apps

So far our app isn’t very exciting to look at, but we do have a working micro-frontend setup running locally. If you aren’t cheering in your seat right now, you should be!

Let’s make some minor improvements to our apps so they look and behave a little nicer.

Specifying the Mount Containers

First, if you refresh your page over and over when viewing the app, you may notice that sometimes the apps load out of order, with the page app appearing above the navbar app.

This is because we haven’t actually specified where each app should be mounted. The apps are simply loaded by SystemJS, and then whichever app finishes loading fastest gets appended to the page first.

We can fix this by specifying a mount container for each app when we register them.

In our index.ejs file that we worked in previously, let's add some HTML to serve as the main content containers for the page:
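The exact markup isn't critical; a minimal sketch, placed inside the body tag and using ids that match the domElement lookups in the next step, could be:

<div id="nav-container"></div>
<div id="page-1-container"></div>
<div id="page-2-container"></div>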

Then, in our root-config.js file where we've registered our apps, let's provide a fourth argument to each function call that includes the DOM element where we'd like to mount each app:

import { registerApplication, start } from "single-spa";
import * as isActive from "./activity-functions";

registerApplication(
  "@thawkin3/single-spa-demo-nav",
  () => System.import("@thawkin3/single-spa-demo-nav"),
  isActive.nav,
  { domElement: document.getElementById('nav-container') }
);

registerApplication(
  "@thawkin3/single-spa-demo-page-1",
  () => System.import("@thawkin3/single-spa-demo-page-1"),
  isActive.page1,
  { domElement: document.getElementById('page-1-container') }
);

registerApplication(
  "@thawkin3/single-spa-demo-page-2",
  () => System.import("@thawkin3/single-spa-demo-page-2"),
  isActive.page2,
  { domElement: document.getElementById('page-2-container') }
);

start();

Now, the apps will always be mounted to a specific and predictable location. Nice!

Styling the App

Next, let’s style up our app a bit. Plain black text on a white background isn’t very interesting to look at.

In the single-spa-demo-root-config directory, in the index.ejs file again, we can add some basic styles for the whole app by pasting the following CSS at the bottom of the head tag:

body,
html {
  margin: 0;
  padding: 0;
  font-size: 16px;
  font-family: Arial, Helvetica, sans-serif;
  height: 100%;
}

body {
  display: flex;
  flex-direction: column;
}

* {
  box-sizing: border-box;
}

Next, we can style our navbar app by finding the single-spa-demo-nav directory, creating a root.component.css file, and adding the following CSS:

.nav {
  display: flex;
  flex-direction: row;
  padding: 20px;
  background: #000;
  color: #fff;
}

.link {
  margin-right: 20px;
  color: #fff;
  text-decoration: none;
}

.link:hover,
.link:focus {
  color: #1098f7;
}

We can then update the root.component.js file in the same directory to import the CSS file and apply those classes and styles to our HTML. We'll also change the navbar content to actually contain two links so we can navigate around the app by clicking the links instead of entering a new URL in the browser's address bar.

import React from "react";
import "./root.component.css";

export default function Root() {
  return (
    <nav className="nav">
      <a href="/page1" className="link">
        Page 1
      </a>
      <a href="/page2" className="link">
        Page 2
      </a>
    </nav>
  );
}

We’ll follow a similar process for the page 1 and page 2 apps as well. We’ll create a root.component.css file for each app in their respective project directories and update the root.component.js files for both apps too.

For the page 1 app, the changes look like this:

.container1 {
  background: #1098f7;
  color: white;
  padding: 20px;
  display: flex;
  align-items: center;
  justify-content: center;
  flex: 1;
  font-size: 3rem;
}
import React from "react";
import "./root.component.css";

export default function Root() {
  return <div className="container1">Page 1 App</div>;
}

And for the page 2 app, the changes look like this:

.container2 {
  background: #9e4770;
  color: white;
  padding: 20px;
  display: flex;
  align-items: center;
  justify-content: center;
  flex: 1;
  font-size: 3rem;
}
import React from "react";
import "./root.component.css";

export default function Root() {
  return <div className="container2">Page 2 App</div>;
}

Adding React Router

The last small change we’ll make is to add React Router to our app. Right now the two links we’ve placed in the navbar are just normal anchor tags, so navigating from page to page causes a page refresh. Our app will feel much smoother if the navigation is handled client-side with React Router.

To use React Router, we’ll first need to install it. From the terminal, in the single-spa-demo-nav directory, we'll install React Router using yarn by entering yarn add react-router-dom. (Or if you're using npm, you can enter npm install react-router-dom.)

Then, in the single-spa-demo-nav directory in the root.component.js file, we'll replace our anchor tags with React Router's Link components like so:

import React from "react";
import { BrowserRouter, Link } from "react-router-dom";
import "./root.component.css";

export default function Root() {
  return (
    <BrowserRouter>
      <nav className="nav">
        <Link to="/page1" className="link">
          Page 1
        </Link>
        <Link to="/page2" className="link">
          Page 2
        </Link>
      </nav>
    </BrowserRouter>
  );
}

Cool. That looks and works much better!

Demo app - styled and using React Router

Getting Ready for Production

At this point we have everything we need to continue working on the app while running it locally. But how do we get it hosted somewhere publicly available?

There are several possible approaches we can take using our tools of choice, but the main tasks are:

  1. to have somewhere we can upload our build artifacts, like a CDN, and
  2. to automate this process of uploading artifacts each time we merge new code into the master branch.

For this article, we’re going to use AWS S3 to store our assets, and we’re going to use Travis CI to run a build job and an upload job as part of a continuous integration pipeline.

Let’s get the S3 bucket set up first.

Setting up the AWS S3 Bucket

It should go without saying, but you’ll need an AWS account if you’re following along here.

If we are the root user on our AWS account, we can create a new IAM user that has programmatic access only. This means we’ll be given an access key ID and a secret access key from AWS when we create the new user. We’ll want to store these in a safe place since we’ll need them later.

Finally, this user should be given permissions to work with S3 only, so that the level of access is limited if our keys were to fall into the wrong hands.
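For reference, a minimal sketch of an IAM policy that grants S3 access and nothing else might look like the following (you could scope the Resource down to a single bucket once it exists; treat this as an illustration rather than a security recommendation):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}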

AWS has some great resources for best practices with access keys and managing access keys for IAM users that would be well worth checking out if you’re unfamiliar with how to do this.

Next we need to create an S3 bucket. S3 stands for Simple Storage Service and is essentially a place to upload and store files hosted on Amazon’s servers. A bucket is simply a directory.

I’ve named my bucket “single-spa-demo,” but you can name yours whatever you’d like. You can follow the AWS guides for how to create a new bucket for more info.
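If you'd rather use the terminal than the console, the same bucket can be created with the AWS CLI, assuming it's installed and configured with the IAM user's credentials (the region here matches the one used later in the Travis CI config):

aws s3 mb s3://single-spa-demo --region us-west-2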

AWS S3 bucket

Once we have our bucket created, it’s also important to make sure the bucket is public and that CORS (cross-origin resource sharing) is enabled for our bucket so that we can access and use our uploaded assets in our app.

In the permissions for our bucket, we can add the following CORS configuration rules:

  * Allow GET requests
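As a rough sketch, that rule in the JSON format the S3 console accepts could look like the following (allowing GET from any origin is an assumption here; restrict AllowedOrigins if you can):

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": []
  }
]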

In the AWS console, it ends up looking like this after we hit Save:

Creating a Travis CI Job to Upload Artifacts to AWS S3

Now that we have somewhere to upload files, let’s set up an automated process that will take care of uploading new JavaScript bundles each time we merge new code into the master branch for any of our repos.

To do this, we’re going to use Travis CI. As mentioned earlier, each app lives in its own repo on GitHub, so we have four GitHub repos to work with. We can integrate Travis CI with each of our repos and set up continuous integration pipelines for each one.

To configure Travis CI for any given project, we create a .travis.yml file in the project's root directory. Let's create that file in the single-spa-demo-root-config directory and insert the following code:

language: node_js
node_js:
  - node
script:
  - yarn build
  - echo "Commit sha - $TRAVIS_COMMIT"
  - mkdir -p dist/@thawkin3/root-config/$TRAVIS_COMMIT
  - mv dist/*.* dist/@thawkin3/root-config/$TRAVIS_COMMIT/
deploy:
  provider: s3
  access_key_id: "$AWS_ACCESS_KEY_ID"
  secret_access_key: "$AWS_SECRET_ACCESS_KEY"
  bucket: "single-spa-demo"
  region: "us-west-2"
  cache-control: "max-age=31536000"
  acl: "public_read"
  local_dir: dist
  skip_cleanup: true
  on:
    branch: master

This implementation is what I came up with after reviewing the Travis CI docs for AWS S3 uploads and a single-spa Travis CI example config.

Because we don’t want our AWS secrets exposed in our GitHub repo, we can store those as environment variables. You can place environment variables and their secret values within the Travis CI web console for anything that you want to keep private, so that’s where the .travis.yml file gets those values from.

Now, when we commit and push new code to the master branch, the Travis CI job will run, which will build the JavaScript bundle for the app and then upload those assets to S3. To verify, we can check out the AWS console to see our newly uploaded files:

Uploaded files as a result of a Travis CI job

Neat! So far so good. Now we need to implement the same Travis CI configuration for our other three micro-frontend apps, but swapping out the directory names in the .travis.yml file as needed. After following the same steps and merging our code, we now have four directories created in our S3 bucket, one for each repo.

Four directories within our S3 bucket

Creating an Import Map for Production

Let’s recap what we’ve done so far. We have four apps, all living in separate GitHub repos. Each repo is set up with Travis CI to run a job when code is merged into the master branch, and that job handles uploading the build artifacts into an S3 bucket.

With all that in one place, there’s still one thing missing: How do these new build artifacts get referenced in our container app? In other words, even though we’re pushing up new JavaScript bundles for our micro-frontends with each new update, the new code isn’t actually used in our container app yet!

If we think back to how we got our app running locally, we used an import map. This import map is simply JSON that tells the container app where each JavaScript bundle can be found.

But, our import map from earlier was specifically used for running the app locally. Now we need to create an import map that will be used in the production environment.

If we look in the single-spa-demo-root-config directory, in the index.ejs file, we see this line:
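That line is a SystemJS import-map script tag pointing at a shared example import map hosted by the single-spa team, roughly of this shape (the exact URL is whatever create-single-spa generated for you):

<script type="systemjs-importmap" src="https://.../importmap.json"></script>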

Opening up that URL in the browser reveals an import map that looks like this:

{
  "imports": {
    "react": "//cdn.jsdelivr.net/npm/[email protected]/umd/react.production.min.js",
    "react-dom": "//cdn.jsdelivr.net/npm/[email protected]/umd/react-dom.production.min.js",
    "single-spa": "//cdn.jsdelivr.net/npm/[email protected]/lib/system/single-spa.min.js",
    "@react-mf/root-config": "//react.microfrontends.app/root-config/e129469347bb89b7ff74bcbebb53cc0bb4f5e27f/react-mf-root-config.js",
    "@react-mf/navbar": "//react.microfrontends.app/navbar/631442f229de2401a1e7c7835dc7a56f7db606ea/react-mf-navbar.js",
    "@react-mf/styleguide": "//react.microfrontends.app/styleguide/f965d7d74e99f032c27ba464e55051ae519b05dd/react-mf-styleguide.js",
    "@react-mf/people": "//react.microfrontends.app/people/dd205282fbd60b09bb3a937180291f56e300d9db/react-mf-people.js",
    "@react-mf/api": "//react.microfrontends.app/api/2966a1ca7799753466b7f4834ed6b4f2283123c5/react-mf-api.js",
    "@react-mf/planets": "//react.microfrontends.app/planets/5f7fc62b71baeb7a0724d4d214565faedffd8f61/react-mf-planets.js",
    "@react-mf/things": "//react.microfrontends.app/things/7f209a1ed9ac9690835c57a3a8eb59c17114bb1d/react-mf-things.js",
    "rxjs": "//cdn.jsdelivr.net/npm/@esm-bundle/[email protected]/system/rxjs.min.js",
    "rxjs/operators": "//cdn.jsdelivr.net/npm/@esm-bundle/[email protected]/system/rxjs-operators.min.js"
  }
}

That import map was the default one provided as an example when we used the CLI to generate our container app. What we need to do now is replace this example import map with an import map that actually references the bundles we’re using.

So, using the original import map as a template, we can create a new file called importmap.json, place it outside of our repos and add JSON that looks like this:

{
  "imports": {
    "react": "//cdn.jsdelivr.net/npm/[email protected]/umd/react.production.min.js",
    "react-dom": "//cdn.jsdelivr.net/npm/[email protected]/umd/react-dom.production.min.js",
    "single-spa": "//cdn.jsdelivr.net/npm/[email protected]/lib/system/single-spa.min.js",
    "@thawkin3/root-config": "//single-spa-demo.s3-us-west-2.amazonaws.com/%40thawkin3/root-config/179ba4f2ce4d517bf461bee986d1026c34967141/root-config.js",
    "@thawkin3/single-spa-demo-nav": "//single-spa-demo.s3-us-west-2.amazonaws.com/%40thawkin3/single-spa-demo-nav/f0e9d35392ea0da8385f6cd490d6c06577809f16/thawkin3-single-spa-demo-nav.js",
    "@thawkin3/single-spa-demo-page-1": "//single-spa-demo.s3-us-west-2.amazonaws.com/%40thawkin3/single-spa-demo-page-1/4fd417ee3faf575fcc29d17d874e52c15e6f0780/thawkin3-single-spa-demo-page-1.js",
    "@thawkin3/single-spa-demo-page-2": "//single-spa-demo.s3-us-west-2.amazonaws.com/%40thawkin3/single-spa-demo-page-2/8c58a825c1552aab823bcbd5bdd13faf2bd4f9dc/thawkin3-single-spa-demo-page-2.js"
  }
}

You’ll note that the first three imports are for shared dependencies: react, react-dom, and single-spa. That way we don’t have four copies of React in our app causing bloat and longer download times. Next, we have imports for each of our four apps. The URL is simply the URL for each uploaded file in S3 (called an “object” in AWS terminology).

Now that we have this file created, we can manually upload it to our bucket in S3 through the AWS console.

Note: This is a pretty important and interesting caveat when using single-spa: The import map doesn’t actually live anywhere in source control or in any of the git repos. That way, the import map can be updated on the fly without requiring checked-in changes in a repo. We’ll come back to this concept in a little bit.

Import map manually uploaded to the S3 bucket

Finally, we can now reference this new file in our index.ejs file instead of referencing the original import map.
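Assuming the bucket exposes objects at the same host used in the import map URLs above, that reference would look something like this:

<script type="systemjs-importmap" src="//single-spa-demo.s3-us-west-2.amazonaws.com/%40thawkin3/importmap.json"></script>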

Creating a Production Server

We are getting closer to having something up and running in production! We’re going to host this demo on Heroku, so in order to do that, we’ll need to create a simple Node.js and Express server to serve our file.

First, in the single-spa-demo-root-config directory, we'll install express by running yarn add express (or npm install express). Next, we'll add a file called server.js that contains a small amount of code for starting up an express server and serving our main index.html file.

const express = require("express");
const path = require("path");

const PORT = process.env.PORT || 5000;

express()
  .use(express.static(path.join(__dirname, "dist")))
  .get("*", (req, res) => {
    res.sendFile("index.html", { root: "dist" });
  })
  .listen(PORT, () => console.log(`Listening on ${PORT}`));

Finally, we’ll update the NPM scripts in our package.json file to differentiate between running the server in development mode and running the server in production mode.

"scripts": {
  "build": "webpack --mode=production",
  "lint": "eslint src",
  "prettier": "prettier --write './**'",
  "start:dev": "webpack-dev-server --mode=development --port 9000 --env.isLocal=true",
  "start": "node server.js",
  "test": "jest"
}

Deploying to Heroku

Now that we have a production server ready, let’s get this thing deployed to Heroku! In order to do so, you’ll need to have a Heroku account created, the Heroku CLI installed, and be logged in. Deploying to Heroku is as easy as 1–2–3:

  1. In the single-spa-demo-root-config directory: heroku create thawkin3-single-spa-demo (changing that last argument to a unique name to be used for your Heroku app)
  2. git push heroku master
  3. heroku open

And with that, we are up and running in production. Upon running the heroku open command, you should see your app open in your browser. Try navigating between pages using the nav links to see the different micro-frontend apps mount and unmount.

Demo app — up and running in production

Making Updates

At this point, you may be asking yourself, “All that work for this? Why?” And you’d be right. Sort of. This is a lot of work, and we don’t have much to show for it, at least not visually. But, we’ve laid the groundwork for whatever app improvements we’d like.

The setup cost for any microservice or micro-frontend is often a lot higher than the setup cost for a monolith; it’s not until later that you start to reap the rewards.

So let’s start thinking about future modifications. Let’s say that it’s now five or ten years later, and your app has grown. A lot. And, in that time, a hot new framework has been released, and you’re dying to re-write your entire app using that new framework.

When working with a monolith, this would likely be a years-long effort and may be nearly impossible to accomplish. But, with micro-frontends, you could swap out technologies one piece of the app at a time, allowing you to slowly and smoothly transition to a new tech stack. Magic!

Or, you may have one piece of your app that changes frequently and another piece of your app that is rarely touched. While making updates to the volatile app, wouldn’t it be nice if you could just leave the legacy code alone?

With a monolith, it’s possible that changes you make in one place of your app may affect other sections of your app. What if you modified some stylesheets that multiple sections of the monolith were using? Or what if you updated a dependency that was used in many different places?

With a micro-frontend approach, you can leave those worries behind, refactoring and updating one app where needed while leaving legacy apps alone.

But, how do you make these kinds of updates? Or updates of any sort, really?

Right now we have our production import map in our index.ejs file, but it's just pointing to the file we manually uploaded to our S3 bucket. If we wanted to release some new changes right now, we'd need to push new code for one of the micro-frontends, get a new build artifact, and then manually update the import map with a reference to the new JavaScript bundle.

Is there a way we could automate this? Yes!

Updating One of the Apps

Let’s say we want to update our page 1 app to have different text showing. In order to automate the deployment of this change, we can update our CI pipeline to not only build an artifact and upload it to our S3 bucket, but to also update the import map to reference the new URL for the latest JavaScript bundle.

Let’s start by updating our .travis.yml file like so:

language: node_js
node_js:
  - node
env:
  global:
    # include $HOME/.local/bin for `aws`
    - PATH=$HOME/.local/bin:$PATH
before_install:
  - pyenv global 3.7.1
  - pip install -U pip
  - pip install awscli
script:
  - yarn build
  - echo "Commit sha - $TRAVIS_COMMIT"
  - mkdir -p dist/@thawkin3/root-config/$TRAVIS_COMMIT
  - mv dist/*.* dist/@thawkin3/root-config/$TRAVIS_COMMIT/
deploy:
  provider: s3
  access_key_id: "$AWS_ACCESS_KEY_ID"
  secret_access_key: "$AWS_SECRET_ACCESS_KEY"
  bucket: "single-spa-demo"
  region: "us-west-2"
  cache-control: "max-age=31536000"
  acl: "public_read"
  local_dir: dist
  skip_cleanup: true
  on:
    branch: master
after_deploy:
  - chmod +x after_deploy.sh
  - "./after_deploy.sh"

The main changes here are adding a global environment variable, installing the AWS CLI, and adding an after_deploy script as part of the pipeline. This references an after_deploy.sh file that we need to create. The contents will be:

echo "Downloading import map from S3"
aws s3 cp s3://single-spa-demo/@thawkin3/importmap.json importmap.json

echo "Updating import map to point to new version of @thawkin3/root-config"
node update-importmap.mjs

echo "Uploading new import map to S3"
aws s3 cp importmap.json s3://single-spa-demo/@thawkin3/importmap.json --cache-control 'public, must-revalidate, max-age=0' --acl 'public-read'

echo "Deployment successful"

This file downloads the existing import map from S3, modifies it to reference the new build artifact, and then re-uploads the updated import map to S3. To handle the actual updating of the import map file’s contents, we use a custom script that we’ll add in a file called update-importmap.mjs.

// Note: this file requires a Node version with native ES module support (node 13.2+ or the --experimental-modules flag)
import fs from "fs";
import path from "path";
import https from "https";

const importMapFilePath = path.resolve(process.cwd(), "importmap.json");
const importMap = JSON.parse(fs.readFileSync(importMapFilePath));
const url = `//single-spa-demo.s3-us-west-2.amazonaws.com/%40thawkin3/root-config/${process.env.TRAVIS_COMMIT}/root-config.js`;

https
  // the import map uses protocol-relative URLs, so add the protocol for the download check
  .get(`https:${url}`, res => {
    // HTTP redirects (301, 302, etc) not currently supported, but could be added
    if (res.statusCode >= 200 && res.statusCode < 300) {
      // the new bundle is reachable, so point the import map at it and save the file
      importMap.imports["@thawkin3/root-config"] = url;
      fs.writeFileSync(importMapFilePath, JSON.stringify(importMap, null, 2));
      console.log(`Updated import map for '@thawkin3/root-config' to ${url}`);
    } else {
      urlNotDownloadable(url, Error(`HTTP response status was ${res.statusCode}`));
    }
  })
  .on("error", err => {
    urlNotDownloadable(url, err);
  });

function urlNotDownloadable(url, err) {
  throw Error(
    `Refusing to update import map - could not download javascript file at url ${url}. Error was '${err.message}'`
  );
}

Note that we need to make these changes for these three files in all of our GitHub repos so that each one is able to update the import map after creating a new build artifact.

The file contents will be nearly identical for each repo, but we’ll need to change the app names or URL paths to the appropriate values for each one.
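For example, in the single-spa-demo-page-1 repo, the url constant and the import map key in update-importmap.mjs would presumably change to something like this, following the paths already used in the import map above:

const url = `//single-spa-demo.s3-us-west-2.amazonaws.com/%40thawkin3/single-spa-demo-page-1/${process.env.TRAVIS_COMMIT}/thawkin3-single-spa-demo-page-1.js`;
// ...and later, update the matching key:
importMap.imports["@thawkin3/single-spa-demo-page-1"] = url;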

A Side Note on the Import Map

Earlier I mentioned that the import map file we manually uploaded to S3 doesn’t actually live anywhere in any of our GitHub repos or in any of our checked-in code. If you’re like me, this probably seems really odd! Shouldn’t everything be in source control?

The reason it’s not in source control is so that our CI pipeline can handle updating the import map with each new micro-frontend app release.

If the import map were in source control, making an update to one micro-frontend app would require changes in two repos: the micro-frontend app repo where the change is made, and the root config repo where the import map would be checked in. This sort of setup would invalidate one of micro-frontend architecture’s main benefits, which is that each app can be deployed completely independent of the other apps.

In order to achieve some level of source control on the import map, we can always use S3’s versioning feature for our bucket.
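If you want to turn that on, one way to do it (again assuming the AWS CLI is configured) is:

aws s3api put-bucket-versioning --bucket single-spa-demo --versioning-configuration Status=Enabled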

Moment of Truth

With those modifications to our CI pipelines in place, it’s time for the final moment of truth: Can we update one of our micro-frontend apps, deploy it independently, and then see those changes take effect in production without having to touch any of our other apps?

In the single-spa-demo-page-1 directory, in the root.component.js file, let's change the text from "Page 1 App" to "Page 1 App - UPDATED!" Next, let's commit that change and push and merge it to master.

This will kick off the Travis CI pipeline to build the new page 1 app artifact and then update the import map to reference that new file URL.

If we then navigate in the browser to //thawkin3-single-spa-demo.herokuapp.com/page1, we'll now see... drum roll please... our updated app!

Demo app — successfully updating one of the micro-frontend apps

Conclusion

I've said it before, and I'll say it again: micro-frontends are the future of frontend web development.

The benefits are huge, including independent deployments, independent ownership, faster build and test times, and the ability to mix and match different frameworks if needed.

There are some downsides, such as the initial setup cost and the complexity of maintaining a distributed architecture, but I believe the benefits outweigh the costs.

And single-spa makes the micro-frontend architecture approachable. Now you, too, can go break up the monolith!