Many web view technologies are very difficult to unit test. This includes JSP/JSTL, PHP views, Rails views, and probably ASP.NET views. The problem is that there is no easy way to set an arbitrary model on a view and then do a string assertion on the rendered output. (Correct me if I'm wrong.)
I did a bit of research on templating engines for Java. Here they are:
Freemarker - Not specific to the web, unit testable without a servlet container, a replacement for JSP. Last update in December 2008.
Velocity - Larger community than Freemarker (according to Freemarker themselves). Also not web-specific and a replacement for JSP. Last update in May 2009.
StringTemplate - A very minimalistic approach to templating, with ports to Python and C#. Also not specific to the web. Last update in June 2008.
All of these technologies basically solve the problem of making the view renderable (and therefore testable) without a servlet container. This makes it easier to pass the responsibility of HTML/CSS design and JavaScript coding off to other developers. Spring supports Velocity and Freemarker, though it shouldn't be too difficult to write an adapter for StringTemplate. Right now, I'm leaning towards Velocity because of the better community support.
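To make the unit-testing claim concrete, here is a minimal sketch of what a container-free view test could look like with Velocity and JUnit. The template string, model values, and test class name are my own illustrative assumptions, not code taken from any of these projects.

import static org.junit.Assert.assertEquals;

import java.io.StringWriter;

import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;
import org.junit.Test;

public class GreetingTemplateTest {

    @Test
    public void rendersGreetingForArbitraryModel() throws Exception {
        // No servlet container needed: the engine runs entirely inside the test JVM.
        VelocityEngine engine = new VelocityEngine();
        engine.init();

        // Set an arbitrary model...
        VelocityContext model = new VelocityContext();
        model.put("name", "World");

        // ...render the (inline) template to a string...
        StringWriter out = new StringWriter();
        engine.evaluate(model, out, "greeting-test", "<p>Hello, $name!</p>");

        // ...and do a plain string assertion on the view.
        assertEquals("<p>Hello, World!</p>", out.toString());
    }
}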
In addition to templating engines, I also looked at page composition frameworks:
Tiles - Originally part of Struts. Employs the composite pattern, where each page explicitly defines the headers, footers, etc. to include. Supports Freemarker and Velocity templates. Last release in February 2009.
SiteMesh - Alternative to Tiles. Employs the decorator pattern, where pages are not aware of the header and footer. Can work with other web technologies (CGI, PHP, etc.) -- this is COOL. Last update in March 2009.
I am really excited about using SiteMesh. Here's a SiteMesh tutorial.
Sunday, July 19, 2009
Database anti-patterns that all developers should know
Josh Berkus, one of the core developers of PostgreSQL, gives a tongue-in-cheek talk on 10 ways to wreck your database.
http://www.oreillynet.com/pub/e/1371
The talk is about 40 minutes long -- a decent enough talk. However, the Q&A portion that followed the talk is stellar. Unfortunately, you can't really just skip to the Q&A portion and you definitely don't want to stop watching after the talk because you would get the wrong impression.
The most important point for me was that I need to start using natural keys in my database and normalize based on natural keys rather than surrogate primary keys.
Tuesday, July 14, 2009
Why and how of test-first development
Test-first development is the process of writing your test code before your implementation code. I have always felt this was the ideal approach but NOT necessarily the pragmatic approach to development. The thought of writing a full test before writing the code is essentially writing the specification first, which feels almost like a micro-waterfall approach to development.
I attended a couple of talks recently that made me realize that test-first is not necessarily about writing the entire test case before the implementation. Rather, you can take a very iterative approach. The idea is that you write just enough of the test code to get your implementation code to fail followed by just enough implementation code to get it to pass. Then you write just enough test code to get your implementation code to fail again and repeat the process until your test and implementation code is complete.
To be more specific, the first line of test code could simply be a call to a method that does not yet exist in your implementation. This will cause a compile error (for those of us on statically typed languages). Then you write just enough code to make the compile error go away. Some of you may be thinking that compiling is not unit testing, which is true, but it is a form of testing nevertheless.
The second line of test code could simply be an assertion that the returned object is not null. This should result in a test failure. To pass this test, you can simply return a dummy object. Now, obviously your implementation code should not be returning dummy objects, so the third line of test code could compare the return value with an expected value, which should cause the test to fail since the implementation is still returning a dummy value. At this point you would modify the implementation method to return the appropriate value instead of the dummy object.
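Here is a rough sketch of that progression in JUnit, using a made-up Calculator class and add() method; the comments mark where each "just enough" step would have left the code failing.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;

import org.junit.Test;

public class CalculatorTest {

    @Test
    public void addReturnsTheSum() {
        Calculator calculator = new Calculator();

        // Step 1: this call alone does not compile until add() exists.
        Integer sum = calculator.add(2, 3);

        // Step 2: fails while add() still returns null; passes once it returns
        // any dummy object (e.g., 0).
        assertNotNull(sum);

        // Step 3: fails against the dummy value, forcing the real implementation.
        assertEquals(Integer.valueOf(5), sum);
    }
}

// The implementation, grown one "just enough" step at a time alongside the test.
class Calculator {
    Integer add(int a, int b) {
        return a + b; // was "return 0;" (a dummy) until step 3 demanded the real sum
    }
}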
Note that this approach breaks your test up into finer-grained assertions. Had you written the entire test up front, you would probably never have thought to perform an assertNotNull, which aids in debugging. In more complex methods, there would probably be many forgotten assertions. This finer-grained approach also makes the test easier to write since we are taking a small bite at a time.
I have been using this approach for the last few months and this has actually become my preferred means of development. In fact, I now get that dirty feeling when I write tests after implementation, much in the same way a test-infected person feels dirty pushing code without unit tests.
Saturday, July 11, 2009
Rolling your own security authentication/authorization?
Before you do, watch this 50-minute video on Spring Security:
http://www.viddler.com/explore/oredev/videos/22
Make sure your hand-rolled security implementation matches the ease of implementation and feature set of Spring Security. If not, use Spring Security.
Saturday, July 4, 2009
Database Connection Pooling: c3p0 versus dbcp
I was looking at database connection pooling the other day and was trying to decide between c3p0 and Apache Commons DBCP. Both are open source (of course) and implement the standard DataSource interface, which means that you can pretty much swap one out for the other without breaking functionality. Neither is very up-to-date, with the last stable releases being over two years old: dbcp 1.2.2 was released 2007-04-04, while c3p0 0.9.1.2 was released 2007-05-21. The consensus appears to be that dbcp is better for single-threaded applications while c3p0 is better for multi-threaded applications (which would include your webapp). Based on this alone, I favor c3p0.
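For what it's worth, here is a sketch of wiring up c3p0 directly in code; the driver class, JDBC URL, credentials, and pool sizes below are placeholder assumptions. Because the factory method only exposes the standard javax.sql.DataSource interface, swapping in dbcp's BasicDataSource later would be confined to this one spot.

import javax.sql.DataSource;

import com.mchange.v2.c3p0.ComboPooledDataSource;

public class ConnectionPoolFactory {

    public static DataSource createPool() throws Exception {
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("org.postgresql.Driver");        // placeholder driver
        ds.setJdbcUrl("jdbc:postgresql://localhost/mydb");  // placeholder URL
        ds.setUser("appuser");                               // placeholder credentials
        ds.setPassword("secret");
        ds.setMinPoolSize(5);                                // placeholder pool sizes
        ds.setMaxPoolSize(20);
        return ds;                                           // callers only ever see DataSource
    }
}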
Reference:
Spring Forum posting
Javatech comparison of c3p0 versus dbcp
Stack Overflow discussion
Wednesday, July 1, 2009
Spring versus Hibernate Validator
I briefly looked at both Hibernate Validator and Spring's validation today.
Hibernate Validator is annotation-based and works something like this: (1) you annotate the fields of your bean with the proper validation rules (e.g., @NotNull); (2) you call Hibernate's ClassValidator to validate, and it returns the validation errors. Presumably, it inspects the annotations on the bean fields and performs the appropriate validation. (A rough sketch follows the pros and cons below.)
Pros: Easy, declaration-based validation.
Cons: I haven't figured out a way to perform a database check for uniqueness.
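Here is the sketch of the annotation-driven style, assuming the pre-Bean-Validation Hibernate Validator 3.x API and a made-up User bean:

import org.hibernate.validator.ClassValidator;
import org.hibernate.validator.InvalidValue;
import org.hibernate.validator.Length;
import org.hibernate.validator.NotNull;

public class HibernateValidatorSketch {

    public static void main(String[] args) {
        User user = new User(); // both fields deliberately left null

        // (2) ClassValidator inspects the annotations and returns the violations.
        ClassValidator<User> validator = new ClassValidator<User>(User.class);
        InvalidValue[] errors = validator.getInvalidValues(user);

        for (InvalidValue error : errors) {
            System.out.println(error.getPropertyName() + ": " + error.getMessage());
        }
    }
}

// (1) The bean declares its rules as annotations on its fields.
class User {
    @NotNull
    @Length(min = 3, max = 20)
    private String username;

    @NotNull
    private String email;
}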
Spring's validation is programmatic. You implement the Validator interface, which contains two methods, one of which is the validate method. You perform your validation manually, either writing the rules yourself or using Spring's utility methods, e.g., ValidationUtils.rejectIfEmpty(...) -- see the sketch below the pros and cons.
Pros: Very flexible, which allows for validation against database records.
Cons: More complex than Hibernate's Validator.
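And here is the sketch of the programmatic style, again with made-up User and UserDao types standing in for the real domain model and data access layer; the point is that a uniqueness check against the database fits naturally inside validate():

import org.springframework.validation.Errors;
import org.springframework.validation.ValidationUtils;
import org.springframework.validation.Validator;

public class UserValidator implements Validator {

    private final UserDao userDao;

    public UserValidator(UserDao userDao) {
        this.userDao = userDao;
    }

    public boolean supports(Class clazz) {
        return User.class.isAssignableFrom(clazz);
    }

    public void validate(Object target, Errors errors) {
        User user = (User) target;

        // Declarative-style checks via Spring's utility methods.
        ValidationUtils.rejectIfEmptyOrWhitespace(errors, "username", "username.required");
        ValidationUtils.rejectIfEmptyOrWhitespace(errors, "email", "email.required");

        // The database-backed check that annotations alone don't cover.
        if (!errors.hasFieldErrors("username") && userDao.usernameExists(user.getUsername())) {
            errors.rejectValue("username", "username.duplicate", "username already taken");
        }
    }
}

// Hypothetical collaborators, for illustration only.
interface UserDao {
    boolean usernameExists(String username);
}

class User {
    private String username;
    private String email;
    public String getUsername() { return username; }
    public String getEmail() { return email; }
}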
Neither solution is particularly ideal: Hibernate's seems too simplistic, while Spring's requires too much programming. I lean towards Spring's Validator, mainly because it's clear how one would perform database-record validation.
Reference:
Tutorial: Getting Started with Hibernate Validator
[Chapter] 6.2 Validation using Spring's Validator interface