Posts Tagged C++

Those red thingies with the X in them

Sometimes the hardest part about finding the solution to a problem is finding the right terminology with which to characterize the problem.

I have been using SlickEdit 2009’s Build command to build my C++ project.  The major reasons I commonly kick off the build from within the IDE (as opposed to from the command line) are:

  • I can double-click on an error and jump to it; and
  • Little red thingies with an X in them appear along the left side of the editor window, providing a quick visual of where the errors are on the current screen:

    (It would be even nicer if I could also get a view of where all the errors are in the current file, like Eclipse (or was it WinMerge?) provides.)  I can hover over one of the red thingies to see all the errors for that line.

Today, the red thingies weren’t there.  I could still build; I could still double-click on errors in the captured build output in the Build window; but I had to actually read things because the helpful visual wasn’t there.

Error Markers

It turns out that the red thingies with an X in them are called error markers, and you can manually call the set-error-markers command (hit Esc to get the SlickEdit command prompt) to make them show up.  (I didn’t want to have to do this manually every time I build, though it is pretty intriguing to have access to such business logic functions.)

In the end, I was able to get the error markers to show up automatically again…by simply restarting  SlickEdit.

Acknowledgments

These posts to the SlickEdit forums helped me get to the right terminology and find the set-error-markers command.


AQTime for C++ code coverage analysis

There’s an area of the system that I’m about to make big changes to.  I had started off by creating characterization tests as a way of understanding how the current system works while also weaving a safety net that will quickly give me feedback on the effects of my code changes.

I have a good suite of tests and was to the point where it would be helpful to me to know what areas of the code are still not exercised by any tests.  I was hankering for a code coverage tool.  I had tasted the goodness of EclEmma in Java/Eclipse land — “Is there anything available for C++?” I wondered.

I resisted stopping to learn a new tool.

I worried that support for C++ might be clunkier, less seamless than what I had experienced in Java.

I wondered what the company approval process would involve to purchase a license.

BUT, the alternatives seemed to be:

  1. Visually trace through my existing tests to get an idea of the code coverage, or
  2. Proceed without knowing what can break without the tests showing it

The tedium and error-proneness of alternative 1, combined with the spectre alternative 2 raised of an extended stabilization period spent figuring out after the fact what my code changes had broken, overcame my reluctance, worries, and wonderings.  Besides, if I could get a code coverage solution in place, next time I could conceivably just use it, whereas if I went with alternative 1 or 2 this time, I would face this same dilemma again next time.

OHHHHkay.  I was moved to go ahead and try to figure this out.

Not as bad as I thought

It didn’t turn out nearly so bad as I feared.

When I asked around internally I found that as a company we already have some experience with a product called AQTime from a company called AutomatedQA.

Having gotten a license, I installed AQTime 6.3.0.

(Let me pause to mention that I find the approval process here to be remarkable.  The IT department seems to have something in common with Jimmy John’s: “Service so fast you’ll freak”.)

1. Gathering the Coverage Data

I looked confusedly and dazedly at a few tutorials, then decided to just try something.  Following is what I did (minus the dead ends and rabbit trails):

  1. (Previously) Had written a suite of unit tests (using the Boost unit test framework; a toy sketch of what such a test looks like follows this list) and built the test executable (it might need to be a debug build)
  2. Dropped to a command prompt and ran the command we use to set up environment variables for our build environment
  3. Started AQTime from that command prompt (otherwise the DLLs needed by the test executable were not in the path, so AQTime was not able to start the executable)
  4. File -> New Project From Module… -> browsed to the unit test executable
  5. Selected the Coverage Profiler (I think the default was the Performance Profiler)
  6. In the Setup tab on the AQTime main window, expanded the tree view of my executable.  In my case, my tests focus on the contents of one .obj file, so I context-clicked on that and said Add Selected to Area -> Add to New Area…

    (The profiling level needs to be set to “Line” for code coverage profiling (the default is Routine level))
  7. Pressed the Run button (the program run lasted 10 or 12 seconds, instead of the usual 2 seconds or so when run from the command line)
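
For anyone unfamiliar with the test side of step 1, here is a toy sketch of the general shape of a Boost unit test (this is not my actual suite; the module and test names are invented).  Defining BOOST_TEST_MODULE before the include generates a main(), so this builds into a runnable test executable:

#define BOOST_TEST_MODULE ToyCharacterizationTests
#include <boost/test/included/unit_test.hpp>

// A self-registering test case; the generated main() runs everything
// declared with BOOST_AUTO_TEST_CASE.
BOOST_AUTO_TEST_CASE(a_characterization_test)
{
    BOOST_CHECK_EQUAL(2 + 2, 4);
}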

2. Displaying the Results

I initially had trouble finding the visual coverage output with the source code.  It is available — I got to it thusly:

  1. Went to AQtime’s Results tab
  2. Double-clicked the thread under Last Results -> Source Files
  3. Selected the source file in the Report tab
  4. Clicked Editor in the Analyze Results panel

This Editor window provided just what I had envisioned.  Now I can quickly see what’s covered, sometimes covered, and not covered. I browsed through, looking for red dots.

The first coverage gaps I saw were in error logging code:

I knew that my tests were probably not exercising these error logging sections, but this is a good confirmation and reminder to me.  Those sections being lower risk, I’m not sure whether I’ll take the time to exercise them all… now I can make a more informed decision.

More interestingly though, here is a whole branch I didn’t realize I wasn’t testing:

This coverage check quickly showed me five to ten such branches that lack tests.  These are areas where if I made changes to the production code, I could too easily miss finding out about the breakage.  Good to know about them!

And next time I want a test coverage check, I shouldn’t have to go through all the preliminaries.  I’m glad to have a coverage checker in place.

Acknowledgment

I found the AQtime application help to be quite…helpful in getting up and running, as I was not familiar with the concept of an “area” or how to get to an editor.


The return of chui red/green indicators

Automated unit testing is not yet to a mature state here on the C++ side.  We’re kind of just getting started*.  Part of just getting started is a lack of tool support.  Funny – when Jon talked about the importance of maintaining a readiness to create tools that streamline the work in your specific project or environment, I wasn’t sure if he was on track or not.  How important is it, really?

Then I realized that I’m doing the same thing in my current environment. So I guess either I’m as crazy as Jon** or Jon’s onto something here.  :)

On to my issue.  Our C++ unit test support is currently command-line based, and it puts out a lot of text to the screen.  I found I was having to carefully cull through a few screens of data looking for what went right or wrong.  It was too much work for each test run.  I decided to colorize.  ANSI escape sequences to the rescue! (putting to use our research from before.)

The script

# chuiredgreenbar.py
#
# Purpose: Colorize the output of dmake runtest to highlight passing and failing tests.
# Author: Daniel Meyer
# Version: 0.1
# Date: 10/28/2009
#
# Usage:
#   dmake runtest 2>&1 | python c:\bin\chuiredgreenbar.py | cat
#
# TODO: Figure out a way to get ANSI escape sequence support without cat (On my XP machine, cmd.exe doesn't
# seem to natively support the ANSI escape sequences; cat works around this).
import sys, re
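# ANSI SGR color codes used in the helpers below (chr(27) is the ESC character):
#   ESC[32;40m sets green text on a black background
#   ESC[31;40m sets red text on a black background
#   ESC[0m     resets all attributes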

def green_bar(s):
    return chr(27) + "[32;40m" + s + chr(27) + "[0m"

def red_bar(s):
    return chr(27) + "[31;40m" + s + chr(27) + "[0m"

# Lines mentioning "passed" get the green treatment.  For failure-summary lines,
# capture the leading count so that a line reporting zero failures still shows green.
pass_pattern = re.compile("passed")
fail_pattern = re.compile("([0-9]+).+[0-9]+ (failed|aborted)")
for raw_line in sys.stdin:
    line = raw_line[:-1]
    result = fail_pattern.search(line)
    if result:
        if result.group(1) == "0":
            print green_bar(line)
        else:
            print red_bar(line)
    elif pass_pattern.search(line):
        print green_bar(line)
    else:
        print line

The command line

Filter the output of dmake through this script with the following command line (cover your eyes):

dmake runtest 2>&1 | python c:\bin\chuiredgreenbar.py | cat

Example output

Observations

  1. The red background is not part of the highlighting – that’s another tool thing I won’t get into here.
  2. I barely know Python
  3. The super hacky part is having to pipe the output to cat to get the ANSI escape sequences interpreted (not to mention merging stderr in with stdout, since some of dmake’s output goes to one stream, some to the other, and we need to process both)

This warty construction is “in production” on my PC – I use it ‘most every day.  If it weren’t for something like this, I would probably still be straining my eyes to see which test cases failed.  Perhaps eventually there will be a graphical UI for the Boost unit test output; but till then, this was an efficient way to fill a need.


*I could look at this negatively, but hey – I get to be part of bringing this discipline to my company, and  besides that, there’s openness to the idea.   Smells like opportunity to me!

**I would consider that designation an honor.  We need more of that kind of crazy in the industry!


The main problem was…

Having just started at a new company, up to this point I had only contributed to an existing C++ project; but yesterday I needed to create a new one myself.  After getting a basic makefile put together and putting some necessaries in the precompiled header, I bumped into a couple of linker errors:

link.exe @C:\DOCUME~1\DANIEL~1.MEY\LOCALS~1\Temp\mk1178003 /out:"DebugU\MyUtilityUD.exe"
LINK : DebugU\MyUtilityUD.exe not found or not built by the last incremental link; performing full link
   Creating library DebugU\MyUtilityUD.lib and object DebugU\MyUtilityUD.exp
main.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) int __cdecl ir::recorderserver::ace_os_main_i(class ACE_Main_Base &,int,char * * const)" (__imp_?ace_os_main_i@recorderserver@ir@@YAHAAVACE_Main_Base@@HQAPAD@Z) referenced in function "int __cdecl ir::recorderserver::wmain(int,wchar_t * * const)" (?wmain@recorderserver@ir@@YAHHQAPA_W@Z)
msvcrtd.lib(wcrtexe.obj) : error LNK2019: unresolved external symbol _wmain referenced in function ___tmainCRTStartup
DebugU\MediaStatusGenUD.exe : fatal error LNK1120: 2 unresolved externals

I had a main() function.  Did ACE expect _wmain()?  Or some weird ace_os_main_i()?

No, I had just placed my main() function inside a couple of namespaces rather than at the global level.  Doh!  Of course, main() needs to be declared in the global namespace.
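
Boiled way down, the shape I ended up with looks something like this (the namespace and function names here are invented, not the real project’s):

namespace ir { namespace myutility {   // hypothetical names

// The interesting code can live inside a namespace...
int run(int argc, char* argv[])
{
    return 0;
}

} }

// ...but the entry point itself has to be at global scope, or the runtime
// startup code never finds it and the linker reports unresolved externals.
int main(int argc, char* argv[])
{
    return ir::myutility::run(argc, argv);
}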

Noting here so next time I don’t spend an hour of futile fiddling with the makefile for this.  ;)


Log file goodness

I’m learning to see the helpfulness of good logging.  There’s already a good logging discipline here around me.  My program was terminating abnormally and I couldn’t figure out where the problem was coming from.  I fired up the debugger* and was able to figure out what the problem was.  Then I looked back in the log files to see if I could have divined from the log output where the issue was.  Yes, I could have.  Here’s the end of the log file:

(screenshot: the tail end of the log file)

Notice all the functions being entered, but not left?  (the squares with the arrow pointing to the right)  We can see that about the last thing in process is loading from an xml file.  I checked my machine, and… I was missing that xml file.  I could have saved myself a lot of wasted time fiddling with my source code, reverting to previous versions, rebooting machines, and the like, if I had been able to recognize the signs of something wrong when I looked at this the first time.

*(without copying source code — how that worked is worthy of a separate post)


replace_all for C++

I’m always surprised by some of the basic functionality C++’s standard library lacks.    This week’s: a function to replace all occurrences within string x of string a with string b.  Of course it can be done, it’s just unwieldy given only the standard library facilities.

That’s why I’m so thankful for the Boost libraries.  Looks like <boost/algorithm/string/replace.hpp>’s replace_all() or replace_all_copy() would do the trick!
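
A quick sketch of how I would expect those two to be used (the strings here are just made-up examples):

#include <iostream>
#include <string>
#include <boost/algorithm/string/replace.hpp>

int main()
{
    std::string text = "feed the cat, brush the cat";

    // In-place: every occurrence of "cat" becomes "dog".
    boost::algorithm::replace_all(text, "cat", "dog");

    // The _copy flavor leaves its input alone and returns the modified string.
    std::string copy = boost::algorithm::replace_all_copy(text, "dog", "ferret");

    std::cout << text << "\n" << copy << "\n";
    return 0;
}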


Is it an assignment or a copy construct?

Suppose we have the following code:

myclass a;
myclass b = a;

Does the  second line invoke b’s default constructor and then the assignment operator?  Wouldn’t it be more efficient to rewrite it like this:

myclass b(a);

Actually, though, the first and second versions are equivalent: both result in a single copy constructor call; neither one uses the assignment operator.

Let’s try an example to demonstrate this.

#include <iostream>
#include <string>

class myclass
{
private:
    std::string innards;
public:
    myclass& operator=(const myclass& c);
    myclass(const myclass& c);
    myclass();
    ~myclass();
};

myclass& myclass::operator=(const myclass& c)
{
    std::cout << "myclass::operator=\n";
    if(&c != this)
    {
        innards = c.innards;
    }
    return *this;
}

myclass::myclass(const myclass& c)
 : innards(c.innards)
{
    std::cout << "myclass::myclass(const myclass& c)\n";
}

myclass::myclass()
{
    std::cout << "myclass::myclass()\n";
}

myclass::~myclass()
{
    std::cout << "myclass::~myclass()\n";
}

int main(void)
{
    myclass a;
    myclass b = a;
    return 0;
}

The output is:

myclass::myclass()
myclass::myclass(const myclass& c)
myclass::~myclass()
myclass::~myclass()

A default construct, a copy construct, and two destructs.
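
For contrast, the assignment operator only gets involved when b already exists.  If main() were changed like this, for example:

int main(void)
{
    myclass a;
    myclass b;   // default construct
    b = a;       // b already exists, so this really is myclass::operator=
    return 0;
}

then the output would show two default constructs, the operator= message, and then the two destructs.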
