EzDevInfo.com

istanbul

Yet another JS code coverage tool that computes statement, line, function and branch coverage with module loader hooks to transparently add coverage when running tests. Supports all JS coverage use cases including unit tests, server side functional tests and browser tests. Built for scale.

Add code coverage to Jasmine testing in browser

I use code coverage with npm and Grunt locally, but I want to demonstrate it in the browser.

If I open a CodePen, how can I have code coverage generated in the browser? Please show an example of this.

Here I'm testing a controller: Jasmine tests the code, but I would like to know whether it's 100% covered, and to show that in the browser. http://codepen.io/clouddueling/pen/Jwaru?editors=001

Could I submit my code to a server, have it tested instantly elsewhere (like on Heroku), and get the results? Can Istanbul run in the client somehow and output an HTML report or a JSON string?
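Istanbul can run in the client in one sense: code that has been pre-instrumented collects its own counts in the page. A rough sketch of that workflow (the paths here are made up, and this assumes a locally installed istanbul):

```shell
# Instrument the sources ahead of time (hypothetical paths):
node_modules/.bin/istanbul instrument src --output src-cov

# Load the files from src-cov (instead of src) in the test page.
# After the Jasmine run, the page-global __coverage__ object holds
# the raw counts; serialize it to JSON, save it, and render a
# report offline:
node_modules/.bin/istanbul report --root coverage-json-dir html
```

Posting the serialized `__coverage__` object to a server, as the question suggests, would fit naturally into this flow.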


Source: (StackOverflow)

Generate two coverage reports in a single jenkins build

I have a Jenkins build which builds my whole Java/AngularJS project. It launches TestNG tests for the Java part and Karma tests for the JavaScript part, so I can generate one TestNG report (for Java) and one JUnit report (for the Karma tests) in my Jenkins build. This is working very well.

Until now, I used Cobertura to report the coverage of my Java tests, but now I would like to also add a coverage report for my Karma tests (generated by Istanbul in Cobertura format). The problem is that Jenkins only lets me generate one coverage report per build (I can't add more than one 'Publish Cobertura Coverage Report' post-build action). So how can I have these two coverage reports in a single Jenkins build?

Thanks a lot


Source: (StackOverflow)


Code Coverage for Istanbul Wrong when using Sandbox in Nodeunit

I have written a bunch of tests using nodeunit. In doing so, I wanted to mock out the modules required by the code under test. Rather than changing the code to make it more easily mockable (inversion of control wasn't needed here), I used nodeunit's sandbox function.

Example

var nodeunit = require("nodeunit");
exports.MyTest = {
  test1: function (test) {
    var fakeGlobals = {
      require: function (filename) {
        if (filename == "CoolUtil.js") {
          return { doit: function wasCool() { return true; } };
        } else {
          return require(filename);
        }
      }
    };
    var testSubject = nodeunit.utils.sandbox("ModuleUnderTest.js", fakeGlobals);
    test.equals(42, testSubject.doSomethingCoolUsingCoolUtil(), "Worked");
    test.done();
  }
};

Istanbul is giving me the wrong coverage numbers. I tried the --post-require-hook flag, which is said to be for use with RequireJS; I'd be fine switching to RequireJS but haven't learned it yet.

test/node_modules/.bin/istanbul cover --v --hook-run-in-context --root test/node_modules/.bin/nodeunit -- --reporter junit --output target/results/unit_tests test

Has anybody been successful combining Istanbul with nodeunit and its sandbox feature?


Source: (StackOverflow)

Jasmine unit test case for $routeChangeStart in AngularJS

I am building an app using AngularJS and am now unit testing my application. I know how to write unit tests for services, controllers, etc., but I don't know how to write one for $routeChangeStart.

I have the following code in my app.js:

app.run(function ($rootScope, $location, AuthenticationService) {
    $rootScope.$on('$routeChangeStart', function () {
        if (AuthenticationService.isLoggedIn()) {
            $rootScope.Authenticated = 'true';
            $rootScope.Identity = localStorage.getItem('identification_id');
        } else {
            $rootScope.Authenticated = 'false';
            $rootScope.Identity = localStorage.removeItem('identification_id');
        }
    });
});

I wrote this code to find out whether the user is logged in on every route change in my app. I have written an AuthenticationService for this purpose:

app.factory('AuthenticationService', function (SessionService) {
    return {
        isLoggedIn: function () {
            return SessionService.get('session_id');
        }
    };
});

And my SessionService looks like this:

app.factory('SessionService', function () {
    return {
        get: function (key) {
            return localStorage.getItem(key);
        }
    };
});

I am using Jasmine to write test cases and Istanbul for code coverage. When I run my tests using Grunt, I get something like this in my app.js:

(screenshot: Istanbul coverage report highlighting the statements above as uncovered)

It's because I am not covering these statements in my test cases, as I don't know how to write a test for this particular piece of code. Any suggestions?


Source: (StackOverflow)

Code coverage on JSX tests with Istanbul

I am trying to instrument my code to get some coverage up and running, but something is slipping through my fingers.

I launch istanbul with:

node_modules/.bin/istanbul cover ./node_modules/mocha/bin/_mocha -- -u exports -R spec

And my mocha.opts looks like this:

app/assets/javascripts/components/**/*-mocha.jsx
--compilers jsx:mocha/compiler.js

Everything seems to run fine (the tests run, at least), but the only coverage I get is on the files used to compile the JSX to JS (used in compiler.js):

compiler.js                 100%
jsx-stub-transform.js       65% 

Terribly useful...

Any ideas?


Source: (StackOverflow)

sending arguments to test script with istanbul

I'm trying to see my Mocha test code coverage with Istanbul, but I need to pass the --recursive argument to the test script _mocha, because otherwise it only runs the tests in the top-level test directory.

I tried istanbul cover "_mocha --recursive" but it says Unable to resolve file [_mocha --recursive].
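Istanbul treats everything after a bare `--` as arguments for the covered script rather than for itself, which suggests dropping the quotes and moving the Mocha flag behind the separator:

```shell
# Flags before -- belong to Istanbul; flags after it go to _mocha:
istanbul cover _mocha -- --recursive
```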


Source: (StackOverflow)

How to get code coverage information using Node, Mocha

I've recently started getting into unit testing for my Node projects with the help of Mocha. Things are going great so far and I've found that my code has improved significantly now that I'm thinking about all the angles to cover in my tests.

Now, I'd like to share my experience with the rest of my team and get them going with their own tests. Part of the information I'd like to share is how much of my code is actually covered.

Below is a sample of my application structure which I've separated into different components, or modules. In order to promote reuse I'm trying to keep all dependencies to a minimum and isolated to the component folder. This includes keeping tests isolated as well instead of the default test/ folder in the project root.

| app/
| - component/
| -- index.js
| -- test/
| ---- index.js

Currently my package.json looks like this. I'm toying around with Istanbul, but I'm in no way tied to it. I have also tried using Blanket with similar levels of success.

{
  "scripts": {
    "test": "clear && mocha app/ app/**/test/*.js",
    "test-cov": "clear && istanbul cover npm test"
  }
}

If I run my test-cov command as it is, I get the following error from Istanbul (which is not helpful):

No coverage information was collected, exit without writing coverage information

So my question would be this: Given my current application structure and environment, how can I correctly report on my code coverage using Istanbul (or another tool)?
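One likely culprit, offered as a guess: `istanbul cover npm test` covers the npm binary itself, while Mocha runs in a child process that never gets instrumented, hence the empty report. Pointing Istanbul directly at _mocha with the same file arguments sidesteps that:

```
{
  "scripts": {
    "test": "clear && mocha app/ app/**/test/*.js",
    "test-cov": "clear && istanbul cover _mocha -- app/ app/**/test/*.js"
  }
}
```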


TL;DR

How can I report on my code coverage using Node, Mocha, and my current application structure?


EDIT

To be clear, Mocha is running the tests correctly in this current state; it's generating the coverage report that I'm struggling with.

EDIT 2

I received a notification that another question may have answered mine already. It only suggested installing Istanbul and running the cover command, which I have already done. Another suggestion recommends running the test command with _mocha; from the research I have done, that was to prevent Istanbul from swallowing the flags meant for Mocha, and it is not necessary in newer versions of Mocha.


Source: (StackOverflow)

How to get karma-coverage (istanbul) to check coverage of ALL source files?

The code structure

I have an app directory structure like

scripts/sequoia/                              
├── GraphToolbar.js                     
├── nodes                                     
│   ├── activityNode.js                       
│   └── annotationNode.js                     
├── OverviewCanvasControl.js                  
└── settings                                  
    ├── GraphControlSettingsFactory.js        
    └── SnapContextFactory.js                 

My test directory currently looks like this:

test/spec/                                        
├── GraphToolbarSpec.js                           
├── settings                                      
│   ├── graphControlSettingsFactorySpec.js        
│   └── snapContextFactorySpec.js                 
└── test-main.js

Note that I only have GraphToolbar and the settings/ files covered so far; there are no tests yet for OverviewCanvasControl.js or the nodes/ files.

The karma config

In my karma.conf.js (coverage refers to karma-coverage):

preprocessors: {                     
  'scripts/sequoia/**/*.js': ['coverage']
},                                   
reporters: ['progress','coverage'],

The problem

When I run Karma, the coverage preprocessor and reporter run, but they only check the files that already have specs written. I want to see 0% coverage reported for OverviewCanvasControl.js and the nodes/ files that have no coverage. When a new file is created and Karma is run, I want it to catch that the file has no spec yet.

How can I get Karma to check all matching source files for coverage, not just those with specs already created?
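karma-coverage can only instrument files that Karma actually serves, so one common approach (sketched below against the structure above, as an assumption about this particular setup) is to list every source file in the `files` array as well as in `preprocessors`; files no spec ever touches then show up at 0%:

```
// karma.conf.js (fragment)
files: [
  'scripts/sequoia/**/*.js',   // every source file, so all get instrumented
  'test/spec/**/*Spec.js',
  'test/spec/test-main.js'
],
preprocessors: {
  'scripts/sequoia/**/*.js': ['coverage']
},
reporters: ['progress', 'coverage'],
```

If the project loads modules through RequireJS (the test-main.js suggests it might), the source entries may additionally need `included: false` so they are served and instrumented but not executed eagerly.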


Source: (StackOverflow)

Ember CLI and code coverage

Has anyone managed to get code coverage working with an Ember CLI project?

I've tried using Blanket.js and Istanbul, as have others here, here and here, without any success. I've managed to get each to actually produce a coverage report, but the report says either 0% (Istanbul) or 100% (Blanket.js), and there's no way the current tests provide 100% coverage.

The built project JavaScript file that Ember CLI produces contains all of the project's source files, with each file's contents output onto one, sometimes massive, line. So even if the coverage tool were able to produce actual coverage metrics for the code in the built file, there's still the issue of viewing the results. God only knows how you would link this back to the original source files.

Ember CLI is great and seems popular so I'm surprised more people haven’t had this issue. Perhaps others aren't as bothered by code coverage or maybe most just get it working without issue and I'm missing something.


Source: (StackOverflow)

Full Gulp Istanbul Coverage Report

I am using gulp-istanbul to generate JavaScript unit test coverage reports through Gulp. Is there a way to configure Istanbul to generate a full coverage report of all the JS files in my Gulp stream, and not just the files touched by a test case?

I'm working on a project with a lot of JS but no unit tests, and we are trying to increase the test coverage. I would like a coverage report that starts by showing 0% coverage for most of our files but presents an increasing percentage over time.

gulp.task( 'test', function () {
    gulp.src( [ my source glob ] )
        .pipe( istanbul() )
        .on( 'end', function () {
            gulp.src( [ my test spec glob ] )
                .pipe( mocha( {
                    reporter: 'spec'
                } ) )
                .pipe( istanbul.writeReports(
                    [ output location ]
                ) );
        } );
} );
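gulp-istanbul has an `includeUntested` option aimed at exactly this case; assuming it behaves as its README describes, files that no test ever requires are still written into the report at 0%, so only the first `pipe` above needs to change:

```
gulp.src( [ my source glob ] )
    .pipe( istanbul( { includeUntested: true } ) )
```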

Source: (StackOverflow)

coveralls github integration (with qunit, istanbul, grunt)

I'm having issues getting Coveralls to work. I've created a simple project here:

https://github.com/thorst/Grunt.Qunit.istanbul

It seems to be outputting the report correctly, but I'm definitely missing a step somewhere because coveralls doesn't see me as being set up. No branches show up, and it simply gives instructions on how to set it up. Of course I've tried my best to follow this, but am missing something.

I've tried to copy what qunit is doing, because they obviously have it working. Again, I'm missing something.

https://github.com/jquery/qunit

Any help would be amazing. Google is sort of light on these topics...

Update:

Here is what I've done so far.

  1. Create project that uses node/grunt/qunit
  2. Create coveralls account and toggle on the project
  3. Replace qunit reference in devDependencies section in package.json with "grunt-coveralls": "0.3.0", "grunt-qunit-istanbul": "^0.4.0"
  4. Add this to package.json "scripts": { "ci": "grunt && grunt coveralls" }
  5. Add config for qunit in Gruntfile.js:

     options: {
       timeout: 30000,
       "--web-security": "no",
       coverage: {
         src: [ "src/<%= pkg.name %>.js" ],
         instrumentedFiles: "temp/",
         coberturaReport: "report/",
         htmlReport: "build/report/coverage",
         lcovReport: "build/report/lcov",
         linesThresholdPct: 70
       }
     },
  6. Updated .travis.yml

```
language: node_js

node_js:
  - "0.10"
before_install:
  npm install -g grunt-cli
install:
  npm install
before_script:
  grunt
after_script:
  npm run-script coveralls
```


Source: (StackOverflow)

Event callback's code coverage

I use Karma (currently v0.10.10) and Jasmine for my unit tests, and Istanbul (via karma-coverage) for code coverage reports. I've noticed a strange behaviour of the code coverage reporter in a particular case.

The code I'm trying to test is roughly this:

/**
 * @param {HTMLInputElement} element
 */
var foo = function(element) {
    var callback = function() {
        // some code
    };

    element.addEventListener("input", callback);
};

In my test, I dispatch a custom input event on the tested element and the callback function executes. The test checks the effects of the callback, and the test passes. In fact, even when I put a hairy console.log("foo") in the callback, I can clearly see it being printed out. However, Istanbul's report erroneously indicates that the callback was not executed at all.

Modifying the tested code to use an anonymous function in the event listener's callback fixes the misbehaviour:

element.addEventListener("input", function() {
    callback();
});

However, I utterly despise "solutions" that modify the application's code to compensate for a code quality control tool's deficiency.

Is there a way I can make the code coverage get picked up correctly without wrapping the callback in an anonymous function?


Source: (StackOverflow)

Generating istanbul code coverage reports for jasmine tests run (via grunt) on a browserify bundle in phantomjs

The title says it all really. Despite trawling the internet I haven't found a single example of a solution to this problem.

Here are some near misses

Here is my in-progress code: https://github.com/wheresrhys/on-guard/tree/browserify (note it's the 'browserify' branch; Gruntfile.js is a bit of a mess but I will tidy it up shortly). My initial investigation using console.log indicates that bundle.src.js is somehow being loaded in the page, but when the tests are run (and pass!) the code in bundle.src.js isn't being executed. So I have a feeling it might be an aliasing problem, though one that's limited to PhantomJS, as the code does get run when I open the spec runner in Chrome.


Source: (StackOverflow)

Exclude function (not an entire file) from JavaScript code coverage

I'm creating some unit tests with Jasmine and the test runner I'm using is Karma. I'm also checking the code coverage of these test specs with the karma-coverage plugin.

I was wondering if there's any way of excluding certain functions from the code coverage itself and also from the Karma report (Istanbul actually). I'm thinking that if the first one is solved then so is the second.

Pretty sure there's no obvious way of doing this, as I've looked in Istanbul as well (karma-coverage uses it), but maybe some of you have run into this before.


Source: (StackOverflow)

Code coverage for AngularJS html templates

We are using istanbul for code coverage in our karma tests. This works great for tracking code coverage of our unit tests in JavaScript. However, this does not track code coverage in our HTML templates.

We have very little logic in our templates, but there is still complexity that we want to track and ensure is properly covered by our tests. What are the best practices to ensure proper coverage over all of your HTML templates? In our particular case we use ng-if and ng-switch, and we'd like to ensure that all branches are properly covered.


Source: (StackOverflow)