Friday, January 18, 2019

FaaS tutorial 2: Set up Google Cloud Function

Now that we have deployed an app in FaaS tutorial 1: Start with Firebase and prepare the ground, it's time to spice up 🌶️ our basic app with some back-end stuff.

What about defining a REST API to add a new record to the database? We'll use HTTP-triggered functions. There are different kinds of triggers for different use cases; we'll dig into that in the next post.

Let's start our tutorial, as always step by step 👣!

Step 1: init function

For your project to use Google Cloud Functions (GCF), use the Firebase CLI to configure it. Simply run the command:
$ firebase init functions

     ######## #### ########  ######## ########     ###     ######  ########
     ##        ##  ##     ## ##       ##     ##  ##   ##  ##       ##
     ######    ##  ########  ######   ########  #########  ######  ######
     ##        ##  ##    ##  ##       ##     ## ##     ##       ## ##
     ##       #### ##     ## ######## ########  ##     ##  ######  ########

You're about to initialize a Firebase project in this directory:

  /Users/corinne/workspace/test-crud2


=== Project Setup

First, let's associate this project directory with a Firebase project.
You can create multiple project aliases by running firebase use --add, 
but for now we'll just set up a default project.

? Select a default Firebase project for this directory: test-83c1a (test)
i  Using project test-83c1a (test)

=== Functions Setup

A functions directory will be created in your project with a Node.js
package pre-configured. Functions can be deployed with firebase deploy.

? What language would you like to use to write Cloud Functions? JavaScript
? Do you want to use ESLint to catch probable bugs and enforce style? No
✔  Wrote functions/package.json
✔  Wrote functions/index.js
✔  Wrote functions/.gitignore
? Do you want to install dependencies with npm now? Yes
Below the ASCII art 🎨, Firebase gets chatty and tells you all about what it's doing.
Once you've selected a Firebase project (select the one we created in tutorial 1 with the Firestore setup), use the default options (JavaScript, no ESLint).

Note: By default, GCF runs on Node 6. If you want to enable Node 8, add the following JSON at root level in your /functions/package.json:
"engines": {
    "node": "8"
  }
You will need Node 8 for the rest of the tutorial, as we use async/await instead of the Promises syntax.
Firebase has created a default package with an initial GCF bootstrap in functions/index.js.

Step 2: HelloWorld

Go to functions/index.js and uncomment the helloWorld function:
exports.helloWorld = functions.https.onRequest((request, response) => {
 response.send("Hello from Firebase!");
});
This is a basic helloWorld function; we'll use it just to get used to deploying functions.

Step 3: deploy

Again, use the Firebase CLI and type the command:
$ firebase deploy --only functions
✔  functions[helloWorld(us-central1)]: Successful update operation. 
✔  Deploy complete!

Please note that it can take up to 30 seconds for your updated functions to propagate.
Project Console: https://console.firebase.google.com/project/test-83c1a/overview
Note that you can also deploy a single function with firebase deploy --only functions:myFunctionName.
If you go to the Firebase console and then to the Functions tab, you will find the URL where your function is available.



Step 4: try it

Since it's an HTTP-triggered function, let's try it with curl:
$ curl https://us-central1-test-83c1a.cloudfunctions.net/helloWorld
Hello from Firebase!
You've deployed and tried your first cloud function. 🎉🎉🎉
Let's now fulfil the same use case as in tutorial 1: we want an HTTP-triggered function that inserts two fields into a database collection.

Step 5: onRequest function to insert in DB

  • In functions/index.js add the function below:
    const admin = require('firebase-admin');
    admin.initializeApp(); // [1]
    
    exports.insertOnRequest = functions.https.onRequest(async (req, res) => {
      const field1 = req.query.field1; // [2] 
      const field2 = req.query.field2;
      const writeResult = await admin.firestore().collection('items').add({field1: field1, field2: field2}); // [3]
      res.json({result: `Message with ID: ${writeResult.id} added.`}); // [4]
    });
    

    • [1]: import the Firebase Admin SDK to access the Firestore database and initialize with default values.
    • [2]: extract data from query param.
    • [3]: add the new message into the Firestore Database using the Firebase Admin SDK.
    • [4]: send back the id of the newly inserted record.
  • Deploy it with firebase deploy --only functions. This will redeploy both functions.
  • Test it by curling:
    $ curl https://us-central1-test-83c1a.cloudfunctions.net/insertOnRequest\?field1\=test1\&field2\=test2
    {"result":"Message with ID: b5Nw8U3wraQhRqJ0vMER added."}
    
Wow! Even better: you've deployed a cloud function that does something 🎉🎉🎉
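One caveat with the onRequest version above: it trusts the query string blindly, so a request missing field1 or field2 would still reach Firestore. A minimal guard could be sketched like this (validateItemQuery is a hypothetical helper, not part of the Firebase SDK):

```javascript
// Hypothetical helper: check the query params before touching Firestore.
function validateItemQuery(query) {
  const errors = [];
  for (const name of ['field1', 'field2']) {
    const value = query[name];
    if (typeof value !== 'string' || value.trim() === '') {
      errors.push(`Missing or empty query param: ${name}`);
    }
  }
  return { valid: errors.length === 0, errors };
}

// Inside insertOnRequest you could then bail out early:
//   const check = validateItemQuery(req.query);
//   if (!check.valid) return res.status(400).json({ errors: check.errors });
```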

Note that if your use case is to call a cloud function from your UI, you can use onCall GCFs. Some of the boilerplate around security is taken care of for you. Let's try to add an onCall function!

Step 6: onCall function to insert in DB

  • In functions/index.js add the function below:
    exports.insertOnCall = functions.https.onCall(async (data, context) => {
      console.log(`insertOnCall::Add to database ${JSON.stringify(data)}`);
      const {field1, field2} = data;
      await admin.firestore().collection('items').add({field1, field2});
    });
    
  • Deploy it with firebase deploy --only functions. This will redeploy all the functions.
  • Test it in your UI code. In tutorial 1, step 5, we defined a Create component in src/components/create.js; let's revisit the onSubmit method:
    onSubmit = (e) => {
        e.preventDefault();
        // insert by calling cloud function
        const insertDB = firebase.functions().httpsCallable('insertOnCall'); // [1]
        insertDB(this.state).then((result) => { // [2]
          console.log(`::Result is ${JSON.stringify(result)}`);
          this.setState({
            field1: '',
            field2: '',
          });
          this.props.history.push("/")
        }).catch((error) => {
          console.error("Error adding document: ", error);
        });
      };
    

    In [1] we pass the name of the function to retrieve a reference; in [2] we simply call this function with a JSON object containing all the fields we need.

Where to go from here?

In this tutorial, you've gotten acquainted with Google Cloud Functions in their most simple form: HTTP triggered. To go further into learning how to code GCFs, the best way is to look at existing code: the firebase/functions-samples repository on GitHub is the perfect place to explore.
In the next tutorials we'll explore the different use cases that best fit a cloud function.

Stay tuned!

Thursday, January 17, 2019

FaaS tutorial 1: Start with Firebase and prepare the ground

As an organiser of RivieraDEV, I was looking for a platform to host our CFP (call for papers). I bumped into the open source project conference-hall while wandering on Twitter (the gossip 🐦 bird is useful from time to time).

The app is nicely crafted and can be used for free. Even better, I learned afterwards that there is a hosted version! That's the one I wanted to use, but we were missing one key feature: sending emails to inform speakers of the deliberations and provide a way for speakers to confirm their participation.

💡💡💡 Open source project? Let's make the world 🌎 better by contributing...

At first look, conference-hall is a web app deployed on Google Cloud Platform. The SPA is deployed using Firebase tooling and makes use of the Firestore database. By contributing to the project, I got acquainted with Firebase. Learning something new is cool, sharing it is even better 🤗 🤩

Time to start a series of blog posts on the FaaS subject. I'd like to explore Google's functions as a service, but also go broader and see how FaaS is implemented in the open source world.

In this first article, I'll share with you how to get started configuring a project from scratch in Firebase and how to deploy it, to get a base project to introduce cloud functions in the next post. Let's get started step by step...

Step 1️⃣: Initialise firebase project

Go to the Firebase console and create a Firebase project; let's name it test.

Step 2️⃣: Use Firestore

  • In the left-hand side menu select the Database tab, then click Create Database. Follow the Firestore documentation if in trouble. The Firebase console UI is quite easy to follow. Note that Firestore was still in beta at the time of writing.
  • Choose Start in test mode, then click the Enable button.
You should be forwarded to the Database explorer, where you can now add a new collection items as below:

Step 3️⃣: Bootstrap app

We use create-react-app to get an initial React app:
npx create-react-app test-crud
cd test-crud
npm install --save firebase
The last command installs the Firebase SDK.

Insert firebase config

  • We use react-scripts' env variable support
  • In .env.local, copy the variables from the Firebase console
  • In src/firebase/firebase.js, read the env variables and initialise Firebase:
    import firebase from 'firebase/app'; // imports assumed (Firebase v5-style SDK)
    import 'firebase/firestore';

    const config = {
      apiKey: process.env.REACT_APP_API_KEY,
      authDomain: process.env.REACT_APP_AUTH_DOMAIN,
      databaseURL: process.env.REACT_APP_DATABASE_URL,
      projectId: process.env.REACT_APP_PROJECT_ID,
      storageBucket: process.env.REACT_APP_STORAGE_BUCKET,
      messagingSenderId: process.env.REACT_APP_MESSAGING_SENDER_ID,
    };
    firebase.initializeApp(config);

    const settings = { timestampsInSnapshots: true }; // settings was undefined in the original snippet
    firebase.firestore().settings(settings);
    
This way you keep your secrets safe, not committed in your code 🤫🤫🤫
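For reference, a .env.local for this setup contains one line per variable; all the values below are placeholders to replace with the ones from your own Firebase console:

```shell
REACT_APP_API_KEY=your-api-key
REACT_APP_AUTH_DOMAIN=your-project-id.firebaseapp.com
REACT_APP_DATABASE_URL=https://your-project-id.firebaseio.com
REACT_APP_PROJECT_ID=your-project-id
REACT_APP_STORAGE_BUCKET=your-project-id.appspot.com
REACT_APP_MESSAGING_SENDER_ID=000000000000
```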

Step 4️⃣: Add routing

npm install --save react-router-dom
mkdir src/components
touch src/components/create.js
And define the routes in src/index.js:
import React from 'react';
import ReactDOM from 'react-dom';
import { BrowserRouter as Router, Route } from 'react-router-dom';
import App from './App';
import Create from './components/create'; // imports assumed for this snippet

ReactDOM.render(
  <Router>
    <div>
      <Route exact path='/' component={App} />
      <Route path='/create' component={Create} />
    </div>
  </Router>,
  document.getElementById('root')
);
On the root path, we'll display the list of items. In the Create component we'll define a form to add new items to the list.

Step 5️⃣: Access Firestore in the app

Let's define the content of the Create component in src/components/create.js:
class Create extends Component {
  constructor() {
    super();
    this.ref = firebase.firestore().collection('items'); // [1] retrieve items reference
    this.state = {
      field1: '',
      field2: '',
    };
  }
  onChange = (e) => {
    this.setState({ [e.target.name]: e.target.value });
  };
  onSubmit = (e) => {
    e.preventDefault();
    const { field1, field2 } = this.state;
    this.ref.add({                                     // [2] Insert by using firestore SDK
      field1,
      field2,
    }).then((docRef) => {
      this.setState({
        field1: '',
        field2: '',
      });
      this.props.history.push("/")
    }).catch((error) => {
      console.error("Error adding document: ", error);
    });
  };

  render() {
    const { field1, field2 } = this.state;
    return (
      <div>
        <div>
          <div>
            <h3>
              Add Item
            </h3>
          </div>
          <div>
            <h4><Link to="/">Items List</Link></h4>
            <form onSubmit={this.onSubmit}>
              <div>
                <label htmlFor="title">field1:</label>
                <input type="text" name="field1" value={field1} onChange={this.onChange}  />
              </div>
              <div>
                <label htmlFor="title">field2:</label>
                <input type="text" name="field2" value={field2} onChange={this.onChange} />
              </div>
              <button type="submit">Submit</button>
            </form>
          </div>
        </div>
      </div>
    );
  }
}

export default Create;
It seems like a lot of code, but the key points are [1] and [2], where we use the Firestore SDK to add a new item to the database directly from the client app. The call in [2] will be revisited in the next blog post to make use of a cloud function.

Step 6️⃣: Deploy on firebase

So we've built a small test app accessing the Firestore DB; let's deploy it to the cloud with Firebase tooling 👍!
  • Start running a production build
    $ npm run build
    
  • Install firebase tools
    $ npm install -g firebase-tools
    $ firebase login
    
  • Init function
    $ firebase init
    
    • Step 1: Select the Firebase features you want to use: Firestore and Hosting. For now we focus only on deploying, i.e. hosting the app.
    • Step 2: Firebase command-line interface will pull up your list of Firebase projects, where you pick firebase-crud.
    • Step 3: Keep the default for the Database Rules file name and just press enter.
    • Step 4: Pay attention to the question about public directory, which is the directory that will be deployed and served by Firebase. In our case it is build, which is the folder where our production build is located. Type “build” and proceed.
    • Step 5: Firebase will ask you if you want the app to be configured as a single-page app. Say "yes".
    • Step 6: Firebase will warn us that we already have build/index.html. All fine!
  • deploy!
    $ firebase deploy
    ...
    ✔  Deploy complete!
    
        Project Console: https://console.firebase.google.com/project/test-83c1a/overview
        Hosting URL: https://test-83c1a.firebaseapp.com
    


Where to go from there?

In this blog post you've seen how to configure and deploy an SPA on Firebase and how to set up a Firestore DB. In the next blog post, you'll see how to write your first Google Cloud Function. Stay tuned.

Thursday, September 13, 2018

Unpublish a npm package

Last week, I was playing with semantic-release: giving your CI control over your semantic releases. Sweet. I should dedicate a post to it (to come later).
Nevertheless, I got into a situation where an erroneous version number got released (wrong commit message). Without a major version bump, a breaking change in the lib won't be reflected (defeating the whole purpose of semantic release). 😱😱😱😱

Unpublish a "recent" version


If you try to unpublish a version just released:
$ npm publish .
+ launcher-demo@5.0.0
$ npm unpublish launcher-demo@5.0.0                   
- launcher-demo@5.0.0

It's OK! Phew, you can do it. 😅😅😅😅
Now, is it possible later to publish the same version?
$ npm publish .                    
npm ERR! publish Failed PUT 400
npm ERR! code E400
npm ERR! Cannot publish over previously published version "5.0.0". : launcher-demo

It makes sense that you can't reuse the same version, so if you update package.json to 5.0.1:
$ npm publish .
+ launcher-demo@5.0.1

Just fine!
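This follows from npm's rule that a published version number is burned forever: the only way forward is to bump. As a trivial illustration in plain JS (not an npm API):

```javascript
// Illustrative only: compute the next patch version,
// since npm never lets you republish over an existing version number.
function bumpPatch(version) {
  const [major, minor, patch] = version.split('.').map(Number);
  return `${major}.${minor}.${patch + 1}`;
}

console.log(bumpPatch('5.0.0')); // → 5.0.1
```

In practice you'd let npm version patch, or semantic-release itself, compute the bump for you.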

Unpublish an "old" version


Let's say I want to unpublish a version released last week:
$ npm unpublish launcher-demo@3.2.8
npm ERR! unpublish Failed to update data
npm ERR! code E400
npm ERR! You can no longer unpublish this version. Please deprecate it instead

Thanks npm for your kind suggestion; let's try to deprecate it with a short message:
$ npm deprecate launcher-demo@3.2.8 'erroneous version'

At least now the package is visible as deprecated; trying to pull it will display a deprecation warning.
$ npm i launcher-demo@3.2.8
npm WARN deprecated launcher-demo@3.2.8: erroneous version


Unpublish policy


"Old", "recent" versions: what does it all mean? Let's check the npm unpublish policy.

Quote: If the package is still within the first 72 hours, you should use one of the following from your command line:
  • npm unpublish <package-name> -f to remove the entire package thanks to the -f or force flag
  • npm unpublish <package-name>@<version> to remove a specific version

Some considerations:
Once a package@version has been used, you can never use it again; you must publish a new version even if you unpublished the old one.
If you entirely unpublish a package, nobody else (not even you) will be able to publish a package of that name for 24 hours.

After the buzzy one-developer-just-broke-Node affair of March 2016 (the left-pad incident), the unpublish policies were changed. A 10-line library used everywhere should not be able to put the whole JS community down. A step toward more immutability won't harm.

Where to go from there


Made an error releasing your package?
You've got 72 hours to fix it. 👍👍👍👍
Otherwise, deprecate it.
Maybe it's time to automate releasing with your CI. 😇😇😇😇



Sunday, June 25, 2017

Dirty secrets on dependency injection and Angular - part 2

In the previous post "Dirty secrets on dependency injection and Angular - part 1", you've explored how DI at component level can produce different instances of a service. Then you've experienced DI at module level: once a service is declared using one token in the AppModule, the same instance is shared across all the modules and components of the app.

In this article, let's revisit DI in the context of lazy-loaded modules. You'll see that dynamically loaded feature modules have a different behaviour.

Let's get started...

Tour of hero app


Let's reuse the Tour of Heroes app that you should be familiar with from our previous post. All source code can be found on GitHub.

As a reminder, in our Tour of Heroes, the app displays a Dashboard page and a Heroes page. We've added a RecentHeroComponent that displays the recently selected heroes on both pages. This component uses the ContextService to store the recently added heroes.

In the previous blog post, you've worked your way through refactoring the app and introduced a SharedModule that contains RecentHeroComponent and uses the ContextService. Let's refactor the app to break it into more feature modules:
  • DashboardModule to contain the HeroSearchComponent and HeroDetailComponent
  • HeroesModule to contain the HeroesComponent


Features module


Here is a schema of what you have in the lazy.loading.routing.shared GitHub branch:


DashboardModule is as below:
@NgModule({
  imports: [
    CommonModule,
    FormsModule,
    DashboardRoutingModule, // [1]
    HeroDetailModule,
    SharedModule            // [2]
  ],
  declarations: [
    DashboardComponent,
    HeroSearchComponent
  ],
  exports: [],
  providers: [
    HeroService,
    HeroSearchService
  ]
})
export class DashboardModule { }

In [1] you define DashboardRoutingModule.

In [2] you import SharedModule which defines common components like SpinnerComponent, RecentHeroesComponent.

HeroModule is as below:
@NgModule({
  imports: [
    CommonModule,
    FormsModule,
    HeroDetailModule,
    SharedModule,  // [1]
    HeroesRoutingModule
  ],
  declarations: [ HeroesComponent ],
  exports: [
    HeroesComponent,
    HeroDetailComponent
  ],
  providers: [ HeroService ] // [2]
})
export class HeroesModule { }

In [1] you import SharedModule which defines common components like SpinnerComponent, RecentHeroesComponent.
Note in [2] that HeroService is defined as a provider in both modules. It could be a candidate to be provided by SharedModule. This service is stateless, however, so having multiple instances won't bother us as much as a stateful service would.

Last, let's look at AppModule:
@NgModule({
  declarations: [ AppComponent ], // [1]
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    SharedModule,     // [2]
    InMemoryWebApiModule.forRoot(InMemoryDataService),
    AppRoutingModule  // [3]
  ],
  providers: [],      // [4]
  bootstrap: [ AppComponent ],
  schemas: [NO_ERRORS_SCHEMA, CUSTOM_ELEMENTS_SCHEMA]
})
export class AppModule {}

In [1], the declarations section is really lean, as most components are declared either in the feature modules or in the shared module.

In [2], you now import the SharedModule from AppModule. SharedModule is also imported in the feature modules. From our previous post we know that with statically loaded modules, the last declared provider for a shared token wins: there is eventually only one instance. Is it the same for lazy loading?

In [3] we define the module for lazy loading; more in the next section.

In [4], the providers section is lean, similar to declarations, as most providers are defined at feature-module level.

Lazy loading modules


AppRoutingModule is as below:
const routes: Routes = [
  { path: '', redirectTo: '/dashboard', pathMatch: 'full' },
  { path: 'dashboard',  loadChildren: './dashboard/dashboard.module#DashboardModule' }, // [1]
  { path: 'detail/:id', loadChildren: './dashboard/dashboard.module#DashboardModule' },
  { path: 'heroes',     loadChildren: './heroes/heroes.module#HeroesModule' }
]

@NgModule({
  imports: [ RouterModule.forRoot(routes) ],
  exports: [ RouterModule ]
})
export class AppRoutingModule {}

In [1], you lazy load DashboardModule with the loadChildren routing mechanism.

Running the app, you can observe the same symptom as when we defined ContextService at component level: DashboardModule has a different instance of ContextService than HeroesModule. This is easily observable with two different lists of recently added heroes.

Checking angular.io module FAQ, you can get an explanation for that behaviour:

Angular adds @NgModule.providers to the application root injector, unless the module is lazy loaded. For a lazy-loaded module, Angular creates a child injector and adds the module's providers to the child injector.

Why doesn't Angular add lazy-loaded providers to the app root injector as it does for eagerly loaded modules?
The answer is grounded in a fundamental characteristic of the Angular dependency-injection system. An injector can add providers until it's first used. Once an injector starts creating and delivering services, its provider list is frozen; no new providers are allowed.

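The behaviour described in the FAQ can be modeled with a toy injector — a deliberately simplified sketch, not Angular's actual implementation: eagerly loaded modules register their providers on the root injector, while a lazy-loaded module gets a child injector with its own providers.

```javascript
// Toy model of hierarchical injectors -- a simplified sketch,
// NOT Angular's actual implementation.
class Injector {
  constructor(parent = null) {
    this.parent = parent;
    this.factories = new Map(); // token -> factory
    this.instances = new Map(); // token -> cached singleton
  }
  provide(token, factory) { this.factories.set(token, factory); }
  get(token) {
    if (this.instances.has(token)) return this.instances.get(token);
    if (this.factories.has(token)) {
      const instance = this.factories.get(token)();
      this.instances.set(token, instance);
      return instance;
    }
    if (this.parent) return this.parent.get(token); // fall back to the parent injector
    throw new Error(`No provider for ${token}`);
  }
}

// Eagerly loaded modules all contribute their providers to the root injector:
const rootInjector = new Injector();
rootInjector.provide('ContextService', () => ({ recentHeroes: [] }));

// A lazy-loaded module gets its own child injector with its own providers:
const lazyInjector = new Injector(rootInjector);
lazyInjector.provide('ContextService', () => ({ recentHeroes: [] }));

// Two distinct instances -> two diverging "recent heroes" lists:
console.log(rootInjector.get('ContextService') === lazyInjector.get('ContextService')); // false
```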

What if you want a singleton shared across your whole app for ContextService? There is a way...

Recycle provider with forRoot


Similar to what RouterModule uses: forRoot. Here is a schema of what you have in the lazy.loading.routing.forRoot GitHub branch:



In SharedModule:
@NgModule({
  imports: [
    CommonModule
  ],
  declarations: [
    SpinnerComponent,
    RecentHeroComponent
  ],
  exports: [
    SpinnerComponent,
    RecentHeroComponent
  ],
  //providers: [ContextService], // [1]
  schemas: [NO_ERRORS_SCHEMA, CUSTOM_ELEMENTS_SCHEMA]
})
export class SharedModule {

  static forRoot() {            // [2]
    return {
      ngModule: SharedModule,
      providers: [ ContextService ]
    }
  }
 }

In [1], remove ContextService as a provider. In [2], define a forRoot method (the naming is a broadly accepted convention) that returns a ModuleWithProviders object. This interface describes a module together with a list of providers. SharedModule will reuse the ContextService provider defined at AppModule level.

In all feature modules, import SharedModule.

In AppModule:
@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    //SharedModule, // [1]
    SharedModule.forRoot(), // [2]
    InMemoryWebApiModule.forRoot(InMemoryDataService),
    AppRoutingModule
  ],
  providers: [],
  bootstrap: [
    AppComponent
  ],
  schemas: [NO_ERRORS_SCHEMA, CUSTOM_ELEMENTS_SCHEMA]
})
export class AppModule {
}

In [1] and [2], replace the SharedModule import by SharedModule.forRoot(). You should only call forRoot at the highest level, i.e. AppModule; otherwise you will end up with multiple instances.

To see the source code, take a look at the lazy.loading.routing.forRoot GitHub branch.

Where to go from there


In this blog post you've seen how providers of lazy-loaded modules behave differently than in an app with eagerly loaded modules.

Dynamic routing brings its share of complexity and can introduce difficult-to-track bugs in your app, especially if you refactor from statically loaded modules to lazy-loaded ones. Watch your shared modules, especially if they provide services.

The Angular team even recommends avoiding providing services in shared modules. If you go that route anyway, you still have the forRoot alternative.

Happy coding!

Friday, June 16, 2017

Dirty secrets on dependency injection and Angular - part 1

Let's talk about Dependency Injection (DI) in Angular. I'd like to take a different approach and tell you the stuff that surprised me when I first learned it, using Angular on larger apps...

A key feature of Angular ever since AngularJS (i.e. Angular 1.x), DI is a pure treasure of Angular, but the injector hierarchy can be difficult to grasp at first. Add routing and dynamic loading of modules and it can all go wild... Services get created multiple times, and if they are stateful (yes, functional lovers, you sometimes need state) the global state (even worse 😅) gets out of sync in some parts of your app.
To get back in control of the singleton instances created for your app, you need to be aware of a few things.

Let's get started...

Tour of hero app


Let's reuse the Tour of Heroes app that you should be familiar with from when you first started at angular.io. Thanks to LarsKumbier for adapting it to webpack; I've forked the repo and adjusted it to my demo's needs. All source code can be found on GitHub.

In this version of Tour of Heroes, the app displays a Dashboard page and a Heroes page. I've added a RecentHeroComponent that displays the recently selected heroes on both pages. This component uses the ContextService to store the recently added heroes.


See AppModule in master branch.

Provider at Component level


Let's go to HeroSearchComponent in src/app/hero-search/hero-search.component.ts file and change the @Component decorator:
@Component({
  selector: 'hero-search',
  templateUrl: './hero-search.component.html',
  styleUrls: ['./hero-search.component.css'],
  providers: [ContextService] // [1]
})
export class HeroSearchComponent implements OnInit {

If you add line [1], you get something like this drawing:



Run the app again.
What do you observe?
The Heroes page works fine, listing the recently visited heroes below it. However, going to the Dashboard's HeroSearchComponent, the recently visited heroes list is empty!!

The recently added heroes list is empty in HeroSearchComponent because you've got a different instance of ContextService. Dependency injection in Angular relies on hierarchical injectors that are linked to the tree of components. This means that you can configure providers at different levels:
  • for the whole application, when bootstrapping it in the AppModule. All services defined in providers will share the same instance.
  • for a specific component and its sub-components. Same as before, but for a specific component: if you redefine a provider at component level, you get a different instance. You've overridden the global AppModule provider.

Tip: don't define app-scoped services at component level. There are very rare use cases where you actually want that.
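The two levels above can be sketched framework-free with a toy hierarchical injector (a simplified model for illustration, not Angular's real implementation): a component that redeclares a provider gets its own instance, while one that doesn't falls back to the app-wide singleton.

```javascript
// A toy hierarchical injector -- a simplified model of Angular's DI,
// NOT the framework's real implementation.
class Injector {
  constructor(providers = {}, parent = null) {
    this.providers = providers;  // token -> factory
    this.instances = new Map();  // cached singletons per injector
    this.parent = parent;
  }
  get(token) {
    if (this.instances.has(token)) return this.instances.get(token);
    if (this.providers[token]) {
      const instance = this.providers[token]();
      this.instances.set(token, instance);
      return instance;
    }
    if (this.parent) return this.parent.get(token); // walk up the component tree
    throw new Error(`No provider for ${token}`);
  }
}

const appModule = new Injector({ ContextService: () => ({ recent: [] }) });
// HeroSearchComponent redeclares the provider -> it gets its OWN instance:
const heroSearch = new Injector({ ContextService: () => ({ recent: [] }) }, appModule);
// HeroesComponent declares nothing -> it falls back to the app-wide instance:
const heroes = new Injector({}, appModule);

console.log(heroes.get('ContextService') === appModule.get('ContextService'));     // true
console.log(heroSearch.get('ContextService') === appModule.get('ContextService')); // false
```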


Provider at Module level


What about providers at module level? What if we do something like:



Let's first refactor the code to introduce a SharedModule as defined in the angular.io guide. In our SharedModule, we put the SpinnerComponent, the RecentHeroComponent and the ContextService. Having created the SharedModule, you can clean up the imports of AppModule, which now looks like:

@NgModule({
  declarations: [
    AppComponent,
    HeroDetailComponent,
    HeroesComponent,
    DashboardComponent,
    HeroSearchComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule,
    SharedModule,
    InMemoryWebApiModule.forRoot(InMemoryDataService),
    AppRoutingModule
  ],
  providers: [
    HeroSearchService,
    HeroService,
    ContextService
  ],
  bootstrap: [
    AppComponent
  ]
})
export class AppModule {}

Full source code on GitHub here. Notice RecentHeroComponent and SpinnerComponent have been removed from declarations. Intentionally, the ContextService appears twice, at SharedModule and AppModule level. Are we going to have duplicate instances?

Nope.
A module does not have a specific injector (as opposed to components, which get their own injector). Therefore, when AppModule provides a service for token ContextService and imports a SharedModule that also provides a service for token ContextService, then AppModule's service definition "wins". This is clearly stated in the NgModule FAQ on angular.io.

Where to go from there


In this blog post you've seen how providers on components play an important role in how singletons get created. Modules are a different story: they do not provide encapsulation like components do.
In the next blog post, you will see how DI and dynamically loaded modules play together. Stay tuned.

Tuesday, May 30, 2017

Going Headless without headache

You're done with a first beta of your Angular 4 app.
Thanks to "Test Your Angular Services" and "Test your Angular component", you've got a good test suite 🤗. It runs OK with npm test on your local dev environment. Now it's time to automate it and have it run on a CI server: be it Travis or Jenkins, choose your weapon. But most probably you will need to run your tests headlessly.

Until recently the only way to go was to use PhantomJS, a "headless" browser that can be run via the command line and is primarily used to test websites without the need to completely render a page.

Since Chrome 59 (still in beta at the time of writing), you can now use Chrome headless! In this post we'll see how to go headless: the classical way with PhantomJS, and then we'll take a peek at Chrome headless. You may want to wait for the official release of Chrome 59 (expected to roll out in May/June this year).

Getting started with angular-cli


Let's use the latest angular-cli release (v1.0.6 at the time of writing); make sure you have installed it in [1].
npm install -g @angular/cli  // [1]
ng new MyHeadlessProject // [2]
cd MyHeadlessProject
npm test // [3]
In [2], create a new project; let's call it MyHeadlessProject.
In [3], run your tests. You can see that by default the tests run in watch mode. If you explore karma.conf.js:
module.exports = function (config) {
  config.set({
    ...
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['Chrome'],  // [1]
    singleRun: false       // [2]
  });
If you switch [2] to true, you go for a single run.
To be headless, you have to change Chrome to PhantomJS.

Go headless with PhantomJS


First, install the PhantomJS browser and its karma launcher with:
npm i phantomjs karma-phantomjs-launcher --save-dev
The next step is to change the karma configuration:
browsers: ['PhantomJS', 'PhantomJS_custom'],
customLaunchers: {
 'PhantomJS_custom': {
    base: 'PhantomJS',
    options: {
      windowName: 'my-window',
      settings: {
        webSecurityEnabled: false
      },
    },
    flags: ['--load-images=true'],
    debug: true
  }
},
phantomjsLauncher: {
  exitOnResourceError: true
},
singleRun: true
and don't forget to import them at the beginning of the file:
plugins: [
  require('karma-jasmine'),
  require('karma-phantomjs-launcher'),
],
Running it, you get the error:
PhantomJS 2.1.1 (Mac OS X 0.0.0) ERROR
  TypeError: undefined is not an object (evaluating '((Object)).assign.apply')
  at webpack:///~/@angular/common/@angular/common.es5.js:3091:0 <- src/test.ts:23952
As per this angular-cli issue, go to polyfills.ts and uncomment:
import 'core-js/es6/object';
import 'core-js/es6/array';
Rerun, tada!
It works!
... until you run into the next polyfill error. PhantomJS is lagging behind; even worse, it's getting deprecated. PhantomJS's main maintainer Vitaly is stepping down, recommending the switch to Chrome headless. It's always cumbersome to need a polyfill just for your automated test suite, so let's take a peek at Chrome headless.

Chrome headless


First of all, you need to have either Chrome beta or Chrome Canary installed.
On Mac:
brew cask install google-chrome-canary
The next step is to change the karma configuration:
browsers: ['ChromeNoSandboxHeadless'],
customLaunchers: {
  ChromeNoSandboxHeadless: {
    base: 'ChromeCanary',
    flags: [
      '--no-sandbox',
      // See https://chromium.googlesource.com/chromium/src/+/lkgr/headless/README.md
      '--headless',
      '--disable-gpu',
      // Without a remote debugging port, Google Chrome exits immediately.
      '--remote-debugging-port=9222',
    ],
  },
},
and don't forget to register the launcher in the plugins section at the beginning of the file:
plugins: [
  ...
  require('karma-chrome-launcher'),
],
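For reference, here's roughly where those pieces sit in karma.conf.js. As before, this is a sketch showing only the settings discussed in this section; everything else stays as generated.

```javascript
// karma.conf.js -- sketch of the headless Chrome Canary setup above.
// Only the settings covered in this post are shown.
module.exports = function (config) {
  config.set({
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher')
    ],
    browsers: ['ChromeNoSandboxHeadless'],
    customLaunchers: {
      ChromeNoSandboxHeadless: {
        base: 'ChromeCanary',
        flags: [
          '--no-sandbox',
          '--headless',
          '--disable-gpu',
          // Without a remote debugging port, Chrome exits immediately.
          '--remote-debugging-port=9222'
        ]
      }
    },
    singleRun: true
  });
};
```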
Rerun, tada! No need for any polyfills.

What's next?


In this post you saw how to run your test suite headlessly to fit your CI test-automation needs. You can get the full source code on GitHub: for PhantomJS in this branch, and for headless Chrome with Canary in this branch. Have fun and try it on your project!

Friday, May 19, 2017

Test your Angular component

In my previous post "Testing your Services with Angular", we saw how to unit test your services using DI (Dependency Injection) to inject mock classes into the test module (TestBed). Let's go one step further and see how to unit test your components.

With component testing, you can:
  • either test at the unit level, i.e., test all public methods: you exercise only the component class, mocking the service and rendering layers.
  • or test at the component level, i.e., test the component and its template together, interacting with HTML elements.
I tend to use whichever method makes the most sense: if my component has a large template, I do more component testing.

Another complexity introduced by component testing is that most of the time you have to deal with the asynchronous nature of HTML rendering. But let's dig in...

Setting up tests


I'll use the code base of openshift.io to illustrate this post. It's a big enough project to go beyond getting-started apps. The source code can be found at https://github.com/fabric8io/fabric8-ui/. To run the tests, use npm run test:unit.

Component test: DI, Mock and shallow rendering


In the previous article "Testing your Services with Angular", you saw how to mock services through the use of DI. Same story here: in TestBed.configureTestingModule you define your testing NgModule with its providers. The providers declared at the NgModule level are available to the components of that module.

For example, let's add a component test for CodebasesAddComponent, a wizard-style component that adds a GitHub repository to the list of available codebases. First, you enter the repository name and hit the "sync" button to check (via the GitHub API) whether the repo exists. Upon success, some repo details are displayed and a final "associate" button adds the repo to the list of codebases.

To test it, let's create the TestBed module; we need to inject all the dependencies in the providers. Check the constructor of CodebasesAddComponent: there are 7 dependencies injected!

Let's write TestBed.configureTestingModule and inject 7 fake services:
beforeEach(() => {
  broadcasterMock = jasmine.createSpyObj('Broadcaster', ['broadcast']);
  codebasesServiceMock = jasmine.createSpyObj('CodebasesService', ['getCodebases', 'addCodebase']);
  authServiceMock = jasmine.createSpy('AuthenticationService');
  contextsMock = jasmine.createSpy('Contexts');
  gitHubServiceMock = jasmine.createSpyObj('GitHubService', ['getRepoDetailsByFullName', 'getRepoLicenseByUrl']);
  notificationMock = jasmine.createSpyObj('Notifications', ['message']);
  routerMock = jasmine.createSpy('Router');
  routeMock = jasmine.createSpy('ActivatedRoute');
  userServiceMock = jasmine.createSpy('UserService');

  TestBed.configureTestingModule({
    imports: [FormsModule, HttpModule],
    declarations: [CodebasesAddComponent], // [1]
    providers: [
      {
        provide: Broadcaster, useValue: broadcasterMock // [2]
      },
      {
        provide: CodebasesService, useValue: codebasesServiceMock
      },
      {
        provide: Contexts, useClass: ContextsMock // [3]
      },
      {
        provide: GitHubService, useValue: gitHubServiceMock
      },
      {
        provide: Notifications, useValue: notificationMock
      },
      {
        provide: Router, useValue: routerMock
      },
      {
        provide: ActivatedRoute, useValue: routeMock
      }
    ],
    // Tells the compiler not to error on unknown elements and attributes
    schemas: [NO_ERRORS_SCHEMA] // [4]
  });
  fixture = TestBed.createComponent(CodebasesAddComponent);
 });

In line [2], you use useValue to inject a value (created via a dynamic mock with Jasmine), or in [3] you use a hand-crafted class to mock your data. Whatever is convenient!
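For intuition, here is roughly what jasmine.createSpyObj hands you in [2]: a plain object whose methods record their calls and return whatever you configure with and.returnValue. This is a simplified sketch of the behavior, not Jasmine's actual implementation:

```javascript
// Simplified sketch of jasmine.createSpyObj: each named method becomes
// a spy that records its calls and returns a configurable value.
// (Illustration only -- use the real jasmine.createSpyObj in tests.)
function createSpyObj(baseName, methodNames) {
  const obj = {};
  for (const name of methodNames) {
    const spy = (...args) => {
      spy.calls.push(args);      // record every invocation
      return spy.returnValue;    // return the stubbed value
    };
    spy.calls = [];
    spy.and = {
      returnValue: (value) => { spy.returnValue = value; }
    };
    obj[name] = spy;
  }
  return obj;
}
```

So a call like gitHubServiceMock.getRepoDetailsByFullName.and.returnValue(...) simply configures what the stubbed method hands back to the component under test.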

In line [4], you use NO_ERRORS_SCHEMA, and in line [1] you declare only one component. This is where shallow rendering comes in. You've stubbed the services (quite straightforward thanks to dependency injection in Angular). Now is the time to stub the child components.

Shallow testing your component means you test your UI component in isolation: the browser displays only the DOM that directly belongs to the component under test. For example, if we look at the template, it contains other component elements like alm-slide-out-panel. Since in [1] you declare only your component under test, Angular would give you errors for the unknown DOM elements; NO_ERRORS_SCHEMA tells the framework to just ignore them.

Note: to compile or not to compile the test component? In most Angular tutorials you will see TestBed.compileComponents, but as specified in the docs this is not needed when you're using webpack.

Async testing with async and whenStable


Let's write your first test to validate the first part of the wizard: click the "sync" button, and the second part of the wizard should be displayed. See the full code here.
fit('Display github repo details after sync button pressed', async(() => { // [1]
  // given
  gitHubServiceMock.getRepoDetailsByFullName.and.returnValue(Observable.of(expectedGitHubRepoDetails));
  gitHubServiceMock.getRepoLicenseByUrl.and.returnValue(Observable.of(expectedGitHubRepoLicense)); // [2]
  const debug = fixture.debugElement;
  const inputSpace = debug.query(By.css('#spacePath'));
  const inputGitHubRepo = debug.query(By.css('#gitHubRepo')); // [3]
  const syncButton = debug.query(By.css('#syncButton'));
  const form = debug.query(By.css('form'));
  fixture.detectChanges(); // [4]

  fixture.whenStable().then(() => { // [5]
    // when github repos added and sync button clicked
    inputGitHubRepo.nativeElement.value = 'TestSpace/toto';
    inputGitHubRepo.nativeElement.dispatchEvent(new Event('input'));
    fixture.detectChanges(); // [6]
  }).then(() => {
    syncButton.nativeElement.click();
    fixture.detectChanges(); // [7]
  }).then(() => {
    expect(form.nativeElement.querySelector('#created').value).toBeTruthy(); // [8]
    expect(form.nativeElement.querySelector('#license').value).toEqual('Apache License 2.0');
  });
}));

In [1], you see the it from Jasmine BDD has been prefixed with an f to focus on this test (a good tip to run only the test you're working on).

In [2], you set the expected results for the stubbed service calls. Notice the services return an Observable; we use Observable.of to wrap a result into an Observable stream.
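Conceptually, Observable.of builds a stream that synchronously emits the given value(s) to each subscriber and then completes. A toy sketch of that behavior (not the real RxJS implementation):

```javascript
// Toy Observable.of: synchronously emits each value to a subscriber,
// then signals completion. (Sketch only -- use RxJS in real code.)
function of(...values) {
  return {
    subscribe(next, error, complete) {
      for (const value of values) next(value);
      if (complete) complete();
    }
  };
}
```

This is why the stubbed result is available to the component as soon as it subscribes: no real network round-trip is involved.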

In [3], you get the DOM elements. Well, not quite: since you use debugElement, you actually get a test helper node; you can always call nativeElement on it to get the real DOM object. As a reminder:
abstract class ComponentFixture {
  debugElement;       // test helper 
  componentInstance;  // access properties and methods
  nativeElement;      // access DOM
  detectChanges();    // trigger component change detection
}

In [4] and [5], you trigger an event for the component to be initialized. As HTML rendering is asynchronous by nature, you need to write an asynchronous test. In Jasmine, you used to write async tests with a done() callback that must be called once your async work is finished. With the Angular framework, you can instead wrap your test inside async().

In [6], you notify the component that a change happened: the user entered a repo name, and some validation goes on in the component. Once the validation succeeds, you trigger another event and notify the component of the change in [7]. Because the flow is asynchronous, you need to chain your promises.

Eventually, in [8], following the given-when-then approach to testing, you can verify your expectations.

Async testing with fakeAsync and tick


Replace async with fakeAsync and whenStable / then with tick, and voilà! No promises in sight here: plain synchronous style.
fit('Display github repo details after sync button pressed', fakeAsync(() => {
  gitHubServiceMock.getRepoDetailsByFullName.and.returnValue(Observable.of(expectedGitHubRepoDetails));
  gitHubServiceMock.getRepoLicenseByUrl.and.returnValue(Observable.of(expectedGitHubRepoLicense));
  const debug = fixture.debugElement;
  const inputGitHubRepo = debug.query(By.css('#gitHubRepo'));
  const syncButton = debug.query(By.css('#syncButton'));
  const form = debug.query(By.css('form'));
  fixture.detectChanges();
  tick();
  inputGitHubRepo.nativeElement.value = 'TestSpace/toto';
  inputGitHubRepo.nativeElement.dispatchEvent(new Event('input'));
  fixture.detectChanges();
  tick();
  syncButton.nativeElement.click();
  fixture.detectChanges();
  tick();
  expect(form.nativeElement.querySelector('#created').value).toBeTruthy();
  expect(form.nativeElement.querySelector('#license').value).toEqual('Apache License 2.0');
}));
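Conceptually, fakeAsync runs your test in a zone where scheduled work lands in a queue instead of the real event loop, and tick() drains that queue synchronously. A toy model of the idea (the real machinery lives in zone.js; these names are mine, not Angular's API):

```javascript
// Toy model of fakeAsync/tick: asynchronous callbacks are queued
// instead of scheduled on the event loop; tick() flushes the queue
// synchronously. (Illustration only -- Angular's version is in zone.js.)
const taskQueue = [];

// Stands in for setTimeout & friends inside the fake zone.
function fakeSchedule(callback) {
  taskQueue.push(callback);
}

// Flush all pending "async" work right now, synchronously.
function tick() {
  while (taskQueue.length > 0) {
    taskQueue.shift()();
  }
}
```

That queue-flushing is why the test above can call tick() after each detectChanges() and then make assertions in plain sequential code.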


When DI gets tricky


While writing these tests, I hit the issue of a component defining its own providers. When your component defines its own providers, it gets its own injector, i.e., a new instance of the service is created at the component level. Is that really what is expected? In my case this was an error in the code. To get more details on how dependency injection works in a hierarchy of components, read this great article.
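The lookup rule behind this: an injector resolves a token locally first and only falls back to its parent when it has no provider for it, caching one instance per injector. A component with its own providers therefore gets a child injector, and with it a fresh service instance. A minimal sketch of that lookup (the class and names are mine for illustration, not Angular's API):

```javascript
// Minimal sketch of hierarchical DI: resolve locally, else ask the
// parent; one cached instance per injector that provides the token.
class SimpleInjector {
  constructor(providers, parent = null) {
    this.providers = providers;     // Map of token -> factory function
    this.parent = parent;
    this.instances = new Map();     // cache: one instance per injector
  }

  get(token) {
    if (this.providers.has(token)) {
      if (!this.instances.has(token)) {
        this.instances.set(token, this.providers.get(token)());
      }
      return this.instances.get(token);
    }
    if (this.parent) return this.parent.get(token);
    throw new Error(`No provider for ${token}`);
  }
}
```

A child injector that re-provides a service gets its own instance; a child with no provider for it shares the parent's instance, which is usually what you want.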

What's next?


In this post you saw how to test an Angular component in isolation, how to test asynchronously, and delved a bit into DI. You can get the full source code on GitHub.
In the next post, I'd like to talk about testing headlessly for your CI/CD integration. Stay tuned. Happy testing!