Thursday, September 18, 2014

Making WAR!

Objective

Find a build system that will allow me to make JAR and WAR files, import dependencies from a Maven repository, copy config files and dependencies, validate the environment, and run dbupdate.  A very ordinary web application environment.

-

UPDATE: September 2021

Gradle is a lot more mature at this point and is the preferred way to go; everything below is in the context of 2014.

---


My development environment

At the time of writing

Kubuntu
git
JetBrains IntelliJ IDEA 13.x
Tomcat 8.x server
PostgreSQL 9.x


What the build system needs to do for me

1. Build the WAR file that will be used in deployment (by Jenkins or similar)

2. Manage dependencies using a Maven repository

3. Create a custom CATALINA_BASE directory, so project-specific jars do not pollute the Tomcat install (which makes upgrading Tomcat non-trivial), holding project-specific configuration such as:

  • DB connection pool
  • distributed session management (based on memcached or similar)
  • custom port (8080 is overused)
  • Tomcat global jars
  • custom log4j configuration (with more info)

4. Reset the database (drop database, create database, run a bunch of SQL scripts using dbupdate)

5. Validate the environment (make sure the necessary applications and config files are in the right place); this is useful when new people are getting set up or when checking that everything is running as expected

6. Run unit and integration tests

Contenders

  • Maven
  • Gradle
  • Ant with Ivy and python


Maven


If there were a scale of build-system rigidity, Maven would be at the most rigid end.  It is either the Maven way or you are in for a world of hurt.  Anything custom needs to be written in Java as a plugin.  Once you define a project in Maven you can easily import it into IDEA and everything just works.  It will build the WAR file for you.  It will run unit tests.  It will manage dependencies.  When it works, it works very well.

The real problem came up when I tried to do things like copy files or execute external tasks: everything is a plugin with a very complicated XML configuration.  It took me almost an hour to get a simple file copy configured, and even then I couldn't get it to synchronize the files (copy only if newer).  I could have written a plugin to do that, but that is a lot more time spent messing with the build environment than I wanted.
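
For context, this is roughly what a simple copy looks like with the maven-resources-plugin (a sketch; the source and target directories are hypothetical):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-resources-plugin</artifactId>
  <version>2.6</version>
  <executions>
    <execution>
      <id>copy-config</id>
      <phase>process-resources</phase>
      <goals>
        <goal>copy-resources</goal>
      </goals>
      <configuration>
        <!-- hypothetical directories: copy project config next to the build output -->
        <outputDirectory>${project.build.directory}/config</outputDirectory>
        <resources>
          <resource>
            <directory>src/main/config</directory>
          </resource>
        </resources>
      </configuration>
    </execution>
  </executions>
</plugin>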

Many of the things that I needed to do, such as dbupdate, creating the custom CATALINA_BASE, and validating the environment, turned into calling Ant tasks just to preserve sanity, and the XML configuration kept growing in complexity.  I started to understand why some Maven POM files are borderline unreadable.
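
Delegating to Ant from Maven via the antrun plugin ends up looking roughly like this (a sketch; the build.xml and the dbupdate target are placeholders for whatever already exists on the Ant side):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <version>1.7</version>
  <executions>
    <execution>
      <id>run-dbupdate</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <!-- placeholder: call an existing Ant build file and target -->
          <ant antfile="build.xml" target="dbupdate"/>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>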

But I really liked the simplicity of how Maven fetched dependencies and built the WAR, and how well it worked with IDEA.  At this point I decided to write an Ant script that would call Maven to do builds and dependency updates, and use Ant/Python for everything else environment related.  But now I was managing two different build systems, so I figured I would try something that might simplify my life a bit.


Gradle

Gradle is near the middle of the build rigidity scale, leaning slightly towards Maven.

According to their site, they are the "next generation" of build systems, so there must be something to this claim.  I started a new project with the same requirements and was able to get dependencies configured and, using the WAR plugin, get the build done.  Seemed reasonable, but the Groovy syntax did not sit well with me; there were way too many ways to do the same thing, which made reading script samples from other people challenging.  Not a show stopper.

Copying files was also not difficult using the copy task, and once you define dependencies you can iterate through them and copy them where you need.
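
A sketch of that kind of copy task (the target directory is just an example):

task copyRuntimeLibs(type: Copy) {
    // copy every resolved runtime dependency into a staging directory
    from configurations.runtime
    into "$buildDir/staging/lib"
}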

Next I tried to incorporate dbupdate (or similar) and ran into a problem where the only solution was to define an Ant task and run it through that; there are samples on stackoverflow.com, so I was able to get it running (so here we have Gradle calling Ant, and I am back to supporting two build systems).  Resetting the database is a three-part task: drop the database, create the database, run the SQL scripts... in that order.  Gradle has a last-minute addition for sequencing tasks, .mustRunAfter and .shouldRunAfter, which makes configuring dependent tasks overly complicated.  One of the main features of a build system should be doing things in a predictable order, and the Gradle way is very fragile:
task resetDatabase(dependsOn: [dropDatabase,createDatabase,createChangelogTable,updateDatabase])
createDatabase.mustRunAfter dropDatabase
createChangelogTable.mustRunAfter createDatabase
updateDatabase.mustRunAfter createChangelogTable

Another issue came up when I imported the Gradle project into IDEA.  Since this is a web app, I need to mark some jars as provided (servlet-api, jsp-api, etc.), but there is no such thing in Gradle; runtime is as far as I could go, and that is not the same as provided.  Runtime means the jar is needed at run time but not necessarily for compiling (and it still gets packaged), while provided means the jar is needed for compiling but is supplied by the container, so it must not end up in the WAR; the servlet spec expects a single copy of servlet-api.jar, and Tomcat will complain on startup otherwise, to make sure your servlet API is consistent with the server.  There is a custom Gradle plugin for doing provided, but it did not work with IDEA.  Every time I changed dependencies it would remove the Tomcat libraries from the project dependencies and the code would break.  To fix it I would have to open Project Settings and manually add a dependency on the Application Server libraries, which marked them as provided.  This got annoying very quickly, since it happened every time anything changed with dependencies.
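
The common workaround at the time was to declare your own provided configuration and add it to the compile classpath yourself, roughly as sketched below (the dependency versions are just examples); that solves the WAR packaging side, but it was the IDEA import of it that kept breaking for me.

configurations {
    provided
}
// visible to the compiler, but not part of the runtime configuration,
// so these jars stay out of WEB-INF/lib in the WAR
sourceSets.main.compileClasspath += configurations.provided

dependencies {
    provided 'javax.servlet:javax.servlet-api:3.1.0'
    provided 'javax.servlet.jsp:javax.servlet.jsp-api:2.3.1'
}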

Another minor issue was with the IDEA Gradle plugin and the Web facet: it created one entry for every directory in WEB-INF, which made for a very complicated configuration when all that was really needed was mapping WEB-INF to the context root.

I do think that many of the problems with Gradle were caused by the IDEA integration, but since that is my environment it affected my decision.

Ant

Ant is at the opposite end of the rigidity scale from Maven: it is very flexible and not very tied to any particular build process, which means I have to define a bit more up front.

Once you add the Ant script to IDEA it just runs it.  There is no dependency management by default (there is an Ivy plugin for IDEA which I have not tried yet), but Apache Ivy uses Maven repositories and is very easy to integrate into Ant.  Pulling down dependencies and copying them into your project is an explicit Ant task (not an implicit action as it is with Maven or Gradle).  Ant is explicit about everything, and sequential dependencies are easy to set up.  The only obvious issue was that I had to add WEB-INF/lib to .gitignore, since Ivy pulls all the jars into the source tree.
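
The Ivy side boils down to a single retrieve task, roughly like this (a sketch; it assumes ivy.jar is on Ant's classpath and that dependencies are declared in ivy.xml):

<project name="webapp" xmlns:ivy="antlib:org.apache.ivy.ant" default="resolve">
  <!-- pull dependencies declared in ivy.xml from a Maven repository into WEB-INF/lib -->
  <target name="resolve">
    <ivy:retrieve pattern="WEB-INF/lib/[artifact]-[revision].[ext]"/>
  </target>
</project>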

There are Ant tasks for almost everything, since it has been around for a long time.  dbupdate was a simple configuration, and creating the custom CATALINA_BASE was also a simple task.  Checking the environment is where it gets weird with Ant: you have to define tasks that set properties describing the environment, and then other tasks that check the result of those properties.  So after a little while the Ant targets get difficult to read.
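
To show what I mean, a minimal environment check ends up as one target that sets properties and a second target that fails if they were not set (a sketch; the file names are just examples):

<target name="check.env">
  <property environment="env"/>
  <!-- set properties describing the environment... -->
  <available file="${catalina.base}/conf/server.xml" property="tomcat.configured"/>
  <condition property="psql.available">
    <available file="psql" filepath="${env.PATH}"/>
  </condition>
</target>

<target name="validate" depends="check.env">
  <!-- ...then fail in a separate target if they were not set -->
  <fail unless="tomcat.configured" message="CATALINA_BASE is not set up"/>
  <fail unless="psql.available" message="PostgreSQL client tools not found on PATH"/>
</target>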

I also had to write a few scripts in python to do environment validation.

Summary

Maven Pro

  • Transparent once everything is where Maven expects it
  • Nice integration with IDEA
  • Archetype projects help you get started quickly

Maven Con

  • Very rigid
  • POM configuration files are very hard to read
  • POM files get very complicated very quickly
  • Custom operations are time consuming and not easy to do



Gradle Pro

  • Nice dependency management
  • Most things can be done programmatically
  • Supports WAR and JAR building via plugins

Gradle Con

  • IDEA plugin is quirky
  • No support for provided libraries
  • Sequential execution of tasks is tedious to configure



Ant Pro

  • Simple to work with and many tasks are already available built-in
  • Ivy dependency management is not intrusive
  • ant-contrib adds a lot of improvements


Ant Con

  • WAR and JAR building requires specifying everything manually


Conclusion

Maven was too much configuration work and I did not want to write plugins.  It's a nice build system if you don't mind the very rigid approach, but doing anything unplanned becomes a major undertaking.

Gradle lacks some features that are very important to me, and the IDEA integration is quirky (having to re-add the application server library whenever dependencies changed was very annoying, the messy Web Facet import, etc.).  This may eventually be resolved, but at the moment it feels hacky and requires you to be aware of the shortcomings and compensate for them.

I chose to go with Ant and Ivy for simplicity, support and maturity.  It provides me with fine-grained control over the build process and stays out of my way while I develop; the initial configuration can be a bit time consuming, but over time it averages out to less maintenance.  The other two were too intrusive, and their dependency management got more in the way than I wanted.  I want my build system to stay out of my way when developing, since it should take up only a very small amount of my time overall.  If I have to be aware of it and work around (or within) it, then I am not as free or productive as I want to be.


Tuesday, August 07, 2012

Profiling tomcat using jvisualvm

 

Enabling Eclipse to use JMX listener on tomcat server

  1. Using the Servers view double click on your server
  2. Click on "Open Launch Configuration"
  3. Go to "Arguments" tab
  4. In "VM arguments" text box (one on the bottom) add following:
    -Dcom.sun.management.jmxremote=true
    -Dcom.sun.management.jmxremote.port=9090
    -Dcom.sun.management.jmxremote.ssl=false
    -Dcom.sun.management.jmxremote.authenticate=false
  5. Launch jvisualvm (in the bin directory of your JDK, version 1.5 or newer)
  6. Under the Applications window you should see "Local" and your Tomcat process; right-click it and choose "Open" to attach

Using JVisualVM to profile a URL

After attaching to the Tomcat process (above) you will have a Tomcat tab with 5 sub-tabs:
Overview - overview of the attached process
Monitor - memory usage, active threads, loaded class count (here you can force the garbage collector to run and generate heap dumps, which can be used for detailed analysis in Eclipse with the MAT plugin, installed via the usual Install New Software... dialog)
Threads - detailed thread information
Sampler - CPU or memory can be sampled (method and CPU time logged)
Profiler - CPU or memory can be profiled (time and invocation count logged)

Monday, July 18, 2011

Quoth The Donald... nevermore.

The Quote

In a somewhat famous statement, Donald Knuth (in a quote he later tried to pawn off on Tony Hoare) allegedly said: "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil".
Over many years of software development I have seen many people justify their poorly written code with: "premature optimization" is bad and should never be done, because Donald Knuth said so. Very often this was the response when their product fell over under a mild load.

As someone who has been involved in code optimization for over a decade, I can tell you what Donald meant: do not waste time making well written code optimally fast; concentrate on getting it working well and deal with unexpected inefficiencies later. Profiling and tuning well written code is relatively easy; poorly written code presents a time consuming problem, because the basics need to be fixed first.

He did not mean that you can just write poorly optimized code and try to justify your actions with a misunderstood quote. Maybe Donald realized the can of worms he had created and tried to distance himself from it; we will never know.


How it all begins

No one writes inefficient code willingly or knowingly. I have never met a developer who had bad intentions. What I have met are developers who convinced themselves that their coding skills were amazing and that they wrote amazing code; that they were a lot better than they really were.

Case 1

I worked with a developer once who told me that he "was something of a big deal around here"; this was before his code proved unable to scale past 5 concurrent users and eventually required a complete rewrite (but not before his options vested, he made a lot of money and left to start another company, never learning how average a developer he was). He was not a bad guy, an average developer at best, but he had convinced himself that he was way above average; a legend in his own mind. Everyone praised him (mostly non-technical people). Since he was an average developer who thought he was amazing, he hired people for his group that he thought were average (when in fact they were barely able to write code). The codebase reflected it. When I looked through the code, I saw many basic problems (CS101 level issues): recursion without stopping conditions through all code paths, which caused occasional race conditions in the browser; many cases of the wrong collection type (like lists of name/value pairs doing O(N) lookups where a map or tree with O(log N) lookups would have been far more efficient); places with loops inside loops that caused quadratic slowdowns where single loops would have done. When I asked him about these many basic problems? Quoth The Donald... nevermore.

Case 2

Another lead developer I worked with was constantly told, by people who did not know how to write code, how superior a developer he was, and he had the mentality that he could do no wrong. He spent his development time trying out new technologies and then applying them to the current product, mostly unsure how they would work but convinced they were somehow cool. This was a Frankenstein's monster of software. It had a 2-pass pre-build (making it insanely difficult to debug), 2 view rendering engines, 3 client presentation layer libraries, and 2 database interface libraries, neither of which worked well enough, so there were a lot of raw SQL statements as well; the database schema was so poorly designed that an average query required 2 inner joins and 2 outer joins (many were so poor that response times for a web page under a 1 user load would easily go over 30 seconds); it was a mess. Load tests with more than 5 concurrent users often never completed, and scaling was pretty much buying lots of hardware and hoping it was enough. When asked why he did not stick with one technology for each area, he said that he was looking for the right one but had not had time to go back and unify them. He saw no problem with 20-30 second response times and claimed it just needed some tweaking. And of course he did Quoth The Donald... nevermore.


What The World Needs Now

There are many more cases but you get the idea. Premature optimization is not writing efficient code up front (that is desirable); it is optimizing specific code without knowing the big picture. Premature optimization is not an excuse to write poor code or build on a poor design. It means: don't spend too much time on minor optimizations that yield negligible results.

Efficient, well written code at the early stages of development can be the difference between a successful project and a long, protracted code cleanup.

Poor Donald... we hardly knew you.

Monday, January 24, 2011

Running the code through PVS-Studio

I was generously offered a temporary license to PVS-Studio to do static analysis on my code and the following is the result.


The Hardware


OS: Windows XP Professional x64 Edition (5.2, Build 3790) Service Pack 2
Processor: AMD64 (8 CPUs), ~2.7GHz
Memory: 4094MB RAM

Performance was not much of an issue and analysis took a few minutes per project, but I figured I should post the hardware used for reference.


The Install

Install was painless and it integrated itself into Visual Studio 2010 as a menu item. You can analyze a file, a project or whole solution. You can also load/save analysis.


The Analysis


I analyzed most of the base libraries used by AObjectServer, then the base modules, and then the actual server code. I also analyzed the load test tool I use (which comes as a command line tool and a Win32 GUI). Overall it was 11 DLLs and 3 EXEs. Analysis per project took about 2-5 minutes (depending on the number of files, I suppose).


The Results

Results are divided into 3 levels.

Start with Level 1 issues.

MESSAGE: V550 An odd precise comparison: - 1.0 == m_stateTimeLimit. It's probably better to use a comparison with defined precision: fabs(A - B) '<' Epsilon.
CODE:
#define INVALID_TIME_INTERVAL -0.1
return (INVALID_TIME_INTERVAL == m_stateTimeLimit || m_stateTimer.getInterval() < m_stateTimeLimit ? false : true);

FIX: return (0.0 > m_stateTimeLimit || m_stateTimer.getInterval() < m_stateTimeLimit ? false : true);
NOTE: This is definitely a good find


MESSAGE: V550 An odd precise comparison: 0.0 == difftime (t). It's probably better to use a comparison with defined precision: fabs(A - B) '<' Epsilon.
CODE: return (0.0 == difftime(t) ? true : false);
FIX: According to the API reference, a return of 0 from difftime means the two times are equal, so the comparison can be done on the time_t values directly.
NOTE: This is a tougher one: the code tests whether difftime returns 0, which means it is the same time. I did notice that this function was operating on time_t, which is defined on Windows as __int64, so comparing to another time_t directly should be fine. I made this change, but I am not sure if other operating systems use something else for time_t.

-----------

Next are the Level 2 issues. Almost all were of the V112 type, which flags usage of numbers that relate to memory sizes (like 4, 8, 32, etc.); all of these were false positives for me. A few of the messages were interesting.

MESSAGE: V401 The structure's size can be decreased via changing the fields' order. The size can be reduced from 32 to 24 bytes.
CODE: in dbghelp.h (part of the Microsoft SDK)
typedef struct _MINIDUMP_EXCEPTION_INFORMATION64 {
    DWORD ThreadId;
    ULONG64 ExceptionRecord;
    ULONG64 ContextRecord;
    BOOL ClientPointers;
} MINIDUMP_EXCEPTION_INFORMATION64, *PMINIDUMP_EXCEPTION_INFORMATION64;
FIX: None, it's part of the Microsoft SDK but interesting.

MESSAGE: 2776 V801 Decreased performance. It is better to redefine the first function argument as a reference. Consider replacing 'const .. path' with 'const .. &path'.
FIX: I accidentally passed an object by value instead of by reference; while not critical in this instance, since it was part of an initialization call, this message is very useful.

MESSAGE: 297 V112 Dangerous magic number 4 used
NOTE: The tool is fixated on numbers that may be associated with memory word sizes, like 4; in all my cases 4 represents a 4 and not a memory size (I use sizeof if I need word sizes). So this resulted in many false positives and probably should have been Level 3.

-----------

Now on to Level 3; here are the more interesting ones.

MESSAGE: V547 Expression 'findFromFront (p) >= 0' is always true. Unsigned type value is always >= 0.
CODE: AASSERT(this, findFromFront(p) >= 0);
FIX: None needed, since findFromFront returns size_t, which is always >= 0; I suspect this is leftover code from the changeover from int to size_t.
NOTE: Good to clean up dead code even if it is harmless

MESSAGE: V547 Expression 'moveSize < 0' is always false. Unsigned type value is never < 0.
NOTE: These were very useful, since they are the result of another porting project that replaced int with size_t and went from signed to unsigned. Most of the warnings were harmless and inside assert statements, but nevertheless, cleaner code is always better.

MESSAGE: V509 The 'throw' operator inside the destructor should be placed within the try..catch block. Raising exception inside the destructor is illegal.
NOTE: Great catch. This was the result of a conversion from assert() to a macro which throws an exception, and it was inside a destructor, which is a bad thing.


The Summary

Overall I liked the tool; it did find a few non-critical issues with the server code, but to be fair I have run most of my code through FlexeLint and BoundsChecker (until my license expired, that is). I also have Visual Studio warning level 4 turned on for debug builds, and that catches a lot of issues.

The main benefit of PVS-Studio is that it is good at finding issues that may affect porting 32-bit code to 64-bit.

Thursday, November 18, 2010

Faster, lighter and elsewhere

I have been slowly making progress on AOS, mostly in the performance and stability area, so not much to update. I run AOS as my personal web/app server and it has been putting up amazing uptimes. It starts with the OS, leaks no memory and restarts with the OS; there have been no incidents of watchdog restarts at all. This is a good thing, given that a large part of the requests are mangled headers, zombie scans, search engine robot scans and other non-website related traffic.

While the product is very much a niche one, aimed at websites that require the fastest execution of code or native integration with existing C/C++ code, it still surprises me when I get a comment/feedback email from a place where I did not think it would be used.

The only thing I have changed in the last few months was to upgrade the solution/project files to Microsoft Visual Studio 2010; those will be part of the next release, or if you need them sooner, email me and I'll put them up.

Sunday, May 24, 2009

Eclipse and Perforce do not mix

I spent the last 2 weeks fighting with the Perforce Eclipse plug-in (the war is not over, I even submitted an enhancement request to support .p4ignore). I wish I had known how poorly Perforce works with Eclipse; I would have fought against it. As source control it works fine as long as you use their GUI tool, but Subversion it is not.

I have used a lot of source control software in my life: CVS, Subversion, ClearCase, SourceSafe, Mercurial, Git, Perforce and a few others for short periods.

My favorite has to be Subversion. In 7 years of using Subversion for my personal development, I have never had any problems. It works from the command line, with Windows Explorer, with Eclipse, with Visual Studio... it just works as you expect it to, no fuss, no config woes. Just point it to a URL and you are done.

I want to preface this by saying that Eclipse is an excellent IDE; the problem is the Perforce plug-in, which is not associated with the Eclipse project at all and is written by the Perforce team.

Some things with perforce that I find unacceptable for commercial software:
- No ignore file (this is the first source control software I have found that doesn't have one); there are times you need to mark a local file so that it is not added to source control (mostly config files that have been customized for the local environment). The Eclipse plug-in allows you to create .p4ignore but their GUI tool does not support it.
- Eclipse integration is shameful. There seems to be a non-transactional nature to the plug-in, so when a problem is encountered you never know what state the file is in, and even worse, some problems fail silently. The login issue is the main cause: after the login expires the Eclipse plug-in starts silently failing unless you restart it; so now I close my Eclipse before I go home and open it again when I get into work (annoying).
- Weird states created by the Eclipse plug-in: if a file is left in a read-write state due to an error, you are now in an odd state where you can't merge with latest, can't check out, and can only revert to the latest (while backing up your file) and then use BeyondCompare (which is an awesome tool) to merge your changes back in.
- Use of the RO flag for management of check-ins; it feels like it's 1995 and SourceSafe again, and this flag can get clobbered by some editors or by the Eclipse plug-in when a checkout fails.

UPDATE: It has been a year and Perforce has managed to fix many of the quirks; it works a lot better with Eclipse now, as of November 2010.

Wednesday, March 11, 2009

Version 1.7.0.0 ready for prime time!

Lots of evenings and weekends spent on getting more stuff in. Most of the changes are internal, but I did add a nice visual admin site viewer using jQuery plugins.

After battling Windows service code I finally got it working well enough that I feel comfortable using it instead of the console mode. It did take a while to figure out all the little kinks that arise when writing services and having to deal with Microsoft's security system (it's like trying to play Spelunker; read the Gameplay section).

I also decided to organize the entire website into the _website folder, which makes getting up and running trivial (not that it was hard before; now everything is under one folder and you can switch website roots easily without having to copy config or resources). This is something that people who run multiple instances have requested.