18 October 2009

Fresh Perspective: Specifications

!!!!!!!!!!!!!!!!!!!!!! Congratulations to Tair on her engagement! !!!!!!!!!!!!!!!!!!!!!!

Last time I wrote about documentation, I was huffy and annoyed.

Not today.

After posting a link to Joel on Software, I followed my own advice and started reading everything that caught my eye there. And there was a lot that caught my eye.

Notable for today's post are Joel's preachings on functional specifications. Reading them made me think again about what a good spec can do for me, the developer. And my teammates.

The upshot of this is that I am (voluntarily) writing a quick spec of what I expect from a particular functional module in the project. Our original monster document never clarified how the end user will accomplish his goal via interactions with the system. But before we can come out with our long-awaited initial version, we need that clarification. Everyone seems to have a different idea of how the interaction should go, but I think that a good spec would enable us to discuss our ideas in a clear format.

29 September 2009

I can tell she is a born.... HUNGARIAN! (bravo, bravo, bravo!)

The title is for all you fans of My Fair Lady.

On to the actual topic: variable naming conventions.
http://www.joelonsoftware.com/articles/Wrong.html

Oh, wow. I really need to start following that blog!

(yes, I got totally distracted for about an hour. Check out this:
http://www.joelonsoftware.com/articles/fog0000000332.html)

(OK, make that 3 hours, and just subscribe to the above-mentioned blog. I did.)

Back to my original point: variable-naming.

When you get someone else's form, which you are meant to add code to, how do you know what (s)he named the controls? Let's say there are five textboxes, three radio buttons (in a single list), and two comboboxes (or drop-down lists, as you please).

Name ____________
Age ___
Weight ___ in O lbs O kilo O stone
Home town _____________
        State __|\/| (that's a combobox, OK?)
Favorite football team ___________|\/|
Favorite food ____________

Now then. I need to pull some data from the form (whether to validate it or for my own nefarious reasons). How do I know what the "Home town" textbox is called? What might it be?

I could go into the form and select the textbox, then look at the Properties box.

That means waiting:
1. wait for form designer to load (seconds)
2. wait for Properties box to load with my selected control (at least one more second)

Or I could make use of Intellisense: enter the first few characters of the control's name and let it prompt me with possible completions. This takes far less than a second... if the control's name is easily guessable.

If the textboxes have names like these:

UserName
Age
Weight
Units (for the list of options)

then Intellisense can only help me if I have a good guess of what my coworker had in mind. So is the control called City? Town? HomeTown? What about the combobox for the state? HomeState?

This is where Hungarian comes in. If you read the wiki on Hungarian notation, you'll find that this particular method of naming your variables has its detractors. Especially in the form I'm suggesting. More on that momentarily.

Hungarian notation means adding a prefix or suffix to your variable names (here, control names) indicating their type. Intellisense works with the first characters you type in, so I'm pushing for prefixes. Here are some Hungarian-named controls:

txtName
txtAge
txtWeight
radUnits

What does that do for me? If my team has been consistent about this, I don't care whose page I work on - I can get Intellisense to show me a list of all the controls of the desired type in three keystrokes (the first three characters of the variable name - that is, the prefix). Then I can choose from the descriptive remainder of the controls' names to get the actual value I want.

This form of Hungarian notation has been deprecated for being redundant - it doesn't add any information because the IDE already keeps track of variable types, and when you're writing a function the variables you want are either passed in as arguments or declared locally (as close to first use as possible, please, and no ten-page functions, and no global variables).

When we're dealing with a form on a designer, however, the story changes. Controls are not declared anywhere that is visible from the page of code you're working on; to view the original declaration, as mentioned above, takes more time than looking up or scrolling up. When you don't know the name of the control you want, only its type and the general meaning of its content or use, prefix Hungarian enables the use of Intellisense and makes life simpler and easier.
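To make the payoff concrete, here's a minimal sketch. The control classes below are stand-ins of my own invention so the snippet compiles anywhere (a real page would use the framework's TextBox and DropDownList), and the control names are illustrative, but the naming principle is the same:

```csharp
// Stand-in control classes so the sketch is self-contained;
// a real page would use the framework's TextBox and DropDownList.
class TextBox { public string Text; }
class DropDownList { public string SelectedValue; }

class UserInfoPage
{
    // prefix = control type, remainder = meaning
    public TextBox txtHomeTown = new TextBox();
    public TextBox txtAge = new TextBox();
    public DropDownList cboHomeState = new DropDownList();

    public string ReadTown()
    {
        // Typing "txt" here gets Intellisense to list exactly the
        // textboxes (txtAge, txtHomeTown) - three keystrokes and
        // the descriptive remainder does the rest.
        return txtHomeTown.Text;
    }
}
```

Three keystrokes narrow the field to one control type; the rest of the name carries the meaning.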

22 September 2009

Null Coalescing

I had seen this before, but now it hit me:

?? is an operator in C# 2.0 and up.

http://weblogs.asp.net/scottgu/archive/2007/09/20/the-new-c-null-coalescing-operator-and-using-it-with-linq.aspx

So all those lines of code that look like this:

int myInt = (myNullableInt.HasValue) ? myNullableInt.Value : 0;

can now look like this:

int myInt = myNullableInt ?? 0;

Hurray!
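As a bonus, ?? chains, which is handy when there are several fallbacks to try in order (the variable names here are just for illustration):

```csharp
// hypothetical fallback chain: try each value in turn, take the first non-null
string preferredName = null;
string loginName = null;
string displayName = preferredName ?? loginName ?? "guest";
// displayName is now "guest": each null operand falls through to the next
```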

16 September 2009

LINQ to SQL, LINQ to Entities

Just read a fascinating thread on the MSDN forums:

Very inefficient SQL generation in EF?


(although one of the posters there complained about how the link to this thread is not permanent...)

In the new world of .NET data access, it is no longer acceptable to deal with DataTables in the Business Layer of your application. DataTable is meant to bridge between the world of the datastore (read: database) and business entities. In former days, Visual Studio's offering in the realm of closing the gap between data records and business entities was the strongly-typed DataSet (ADO.NET, if you please). When you added an SQL Server database to your project (this is still true of VS 2008), the IDE immediately started a wizard to generate a .xsd file defining classes derived from DataTable (and its relatives, DataRow and so on) that would be strongly typed to match your database's schema. In essence, this wizard took you about halfway to mapping your data from the hierarchical-relational model to the object model. It was one step short of having your records translated to full-fledged objects.

In order to achieve the full translation, a new mapper was needed: one that would translate your hierarchical-relational model to business entities. The disadvantage of such a mapper would be the way it limits querying: if the object you are talking to is not a table or view, you can't query it.

With the advent of LINQ, the story changed. Microsoft developed LINQ to SQL to fill the gap between table and business entity, and with the capabilities of LINQ, you can write code to query collections of objects. It's nearly as easy to write code in LINQ as it is to query tables of records in regular SQL. (Sometimes it's actually quicker to code in LINQ, but complex hacker-style queries are still best in SQL.)
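For instance, here's a quick LINQ to Objects query over a plain in-memory list - no database involved. The Person class and the data are made up for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

class Demo
{
    static void Main()
    {
        var people = new List<Person>
        {
            new Person { Name = "Liora", Age = 22 },
            new Person { Name = "Dina",  Age = 17 }
        };

        // reads almost like SQL, but runs over objects in memory
        var adults = from p in people
                     where p.Age >= 18
                     select p.Name;

        foreach (var name in adults)
            Console.WriteLine(name);   // prints "Liora"
    }
}
```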

The people who make Visual Studio actually published TWO object-relational mappers with VS2008: LINQ to SQL and LINQ to Entities. LINQ to SQL is heavily tied to SQL Server; LINQ to Entities is part of the Entity Framework, which is supposed to support many data providers.

It seems that the Entity Framework is still in the experimental stage, not yet market-ready. The people on the forum linked above were unimpressed with the way LINQ to Entities translates your query into SQL. LINQ to SQL, on the other hand, rates pretty well on performance, with some tradeoffs in flexibility.

The bad news is that LINQ to SQL is no longer under development; MS sees the Entity Framework as its next-generation ADO.NET. The good news is that the LINQ to SQL team has been integrated into the Entity Framework team, so there's hope that at least the SQL Server implementation of LINQ to Entities will be as good as LINQ to SQL.

This being the case, LINQ to SQL will stick around a while longer, at least until the Entity Framework team manages to port the good things (read: provider-specific optimizations) from LINQ to SQL into their SQL Server provider.

15 September 2009

Version Control for the Server-Challenged

Some background: there are three of us working on a large project. We are not located in the same city; two of us are about an hour's drive away, given traffic, and the third is in a totally different region of the country. Each of us is developing web pages on her own machine at home, sometimes coming in to school and developing on her own (desktop) machine there. That means that at least six versions of the code exist (one for each location). Usually, we work on discrete sets of pages... but not always. For example, if I am working on localization and Liora is working on authentication, we each need to make small changes to ALL of the pages in the set.

The solution to our problem should be a version control server, like Subversion. If we were all using Visual Studio proper, we could even use an add-on called Ankh to integrate Subversion with the IDE. (I personally am using Visual Studio Express at home, so that won't work, anyway.)

The issue with this solution lies in where to store the repository - the central "original" from which all copies are checked out and to which all updates are committed. Subversion (and its client applications like TortoiseSVN and AnkhSVN) offers five options:
  1. local file
  2. http (on an Apache server)
  3. https (same as above, but secure)
  4. svn (on an SVN-enabled Linux server)
  5. ssh+svn (same as above, but secure)
Students are notoriously short on cash. The obvious result? We can't afford our own web server for this project. While we each have space on our school's web server (some of us even know how to access that space), we don't make the decisions of what software that server will run. I tested to see if our server is already running Subversion or CVS, its predecessor. It is running neither.

This being the case, how will we manage version control?

The only option left open to us is 1, the local file. Local where? Well, since this is my idea and I'm working on my home machine, local here. And then everyone else will need to send me their versions for periodic commits and updates. This is not optimal, because it turns an engineering problem (where the tools would force the appropriate action to ensure correct results) into a personnel problem (where with every additional worker, the chances of something going wrong increase). Some whip-cracking may be required.
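For the record, setting up the local-file flavor is trivial - assuming Subversion is installed, it's something like this (the paths here are hypothetical):

```shell
# create a repository on my home machine's disk
svnadmin create /home/me/repos/project

# check out a working copy through the file:// scheme
svn checkout file:///home/me/repos/project project-wc
```

From there it's the usual svn update / svn commit cycle - for whoever sits at this machine, anyway.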

Life's not fair. But we can get by.

09 September 2009

More Asynchronous Stuff

We've divided up the project, and I have selected a new (and exciting?) role for myself: the UI developer. Web UI, that is.

Taking a look around at popular sites, I can tell you already that I need to learn all about making good use of AJAX. Let me attempt to define it, based of course on an acquaintance of less than three weeks.

Traditional ASP and ASP.NET sites are heavy on server-side processing. For any event you want to handle, the page needs to post back to the server and be loaded again. If your page is large, that's a lot of work. There are ways to pare down page size (setting EnableViewState to false on the controls that don't need it, for one thing), but the bottom line is a page reload for every click of a button. This means slow loading, flickering, and a choppy user experience.

Popular sites are developed by people who know what their audience wants. And while I am not yet a developer of a popular site, I can tell you right now what my audience wants: a site that loads quickly and that doesn't flicker. (Or hang up. Or crash their browser. More on those in a minute.)

Enter AJAX, a technology that was available as an add-in for Visual Studio 2005 and is a charter member of Visual Studio 2008. With AJAX, you can integrate Javascript asynchronous event handling into your page. The Javascript handlers asynchronously call server-side functions (exposed via .asmx web service or WCF - and one other method, for another post). The user continues interacting with a living, breathing page while the asynchronous request waits in the background for a response from the server; when the response arrives, a Javascript callback updates the page accordingly and all is well. Mostly.

Note that I said the handlers asynchronously call server-side functions. The call is asynchronous; the server-side functions are not necessarily. What does this mean to my end-user? What does this mean to me?

The trouble comes in when the server response takes a long time. If many requests come in, and enough of them are taking a long time, then my server will run out of threads and my site will hang up. (Remember what I said? My users won't want that.)

The solution, my friends, is easier said than done: implement asynchronous calls where appropriate on the server side. For example, if my service function is doing I/O with the data store, the call to the DAL should be asynchronous. That way, the thread working on that service call can sleep instead of "busy waiting" and free itself up for another service call (maybe one that doesn't need any heavy I/O).
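The shape of this is the .NET 2.0-era Begin/End (APM) pattern. Here's a self-contained sketch of that pattern; the SlowQuery method is a stand-in of my own for a real DAL call (which would use something like SqlCommand's BeginExecuteReader/EndExecuteReader pair for true asynchronous I/O):

```csharp
using System;
using System.Threading;

class ApmSketch
{
    // stand-in for a slow DAL call; the real thing would hit the database
    static string SlowQuery() { Thread.Sleep(200); return "42 rows"; }

    static void Main()
    {
        Func<string> dal = SlowQuery;

        // Begin* returns immediately; the work proceeds in the background
        IAsyncResult ar = dal.BeginInvoke(null, null);

        // ... this thread is free to service another request here ...

        string result = dal.EndInvoke(ar);   // rendezvous with the result
        Console.WriteLine(result);           // prints "42 rows"
    }
}
```

Note this delegate-based form still occupies a pool thread behind the scenes; the real win comes when the Begin* call maps to genuine asynchronous I/O in the provider.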

Sounds great in theory. What about in practice?

I think I'll go practice...

03 September 2009

WCF Timeouts

Did you ever search all over the web and only find the same wrong answer, over and over? I recently had this experience.

In an unrelated project, I was trying to set up a notification system using WCF. At some unspecified point in time, the web service would receive a notification for the client and would need a way to propagate that notification. Our thought had been to implement this with asynchronous calls to the service.

When using Visual Studio's very handy WCF creation environment, immediately after building your WCF app you can have another project in the same solution discover it. The client project can then choose to implement the WCF service function calls "asynchronously" (this is a checkbox in the configuration dialog for adding/changing a service reference).

Now, let's get this straight once and for all: the call to the service is NOT asynchronous. When you add in the asynchronous operations, what ACTUALLY happens is that the generated client class creates a new thread to handle the call, then has that thread notify your main thread when the call returns, so that the function call is asynchronous to your client program. But the actual call to the server is synchronous and will time out in a minute. Let me stress, in ONE minute.

The default operation timeout on a WCF client channel is 00:01:00. And contrary to popular misinformation, the settings in your .config file have nothing to do with this. The error message itself includes this line:

Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property)

That being the case, your "asynchronous" call has exactly a minute to be notified and return with the goods, or you are catching an exception. How to increase the operation timeout? By casting the channel to IContextChannel.

I admit that though my reading comprehension skills are at least average, I missed the tip in the error message and wasted hours playing with .config files, just as the forums suggested. Until I found this:

http://www.codeproject.com/KB/WCF/WCF_Operation_Timeout_.aspx

Yes, the error message had it right.

Since we're working with the autogenerated client class, it would be unwise to go into the (hidden) Reference.cs file that contains the ServiceFooClient class. Note, however, that ServiceFooClient is a partial class. So we can make our own file, call it ServiceFooClient.cs, and open the class from there:


    public partial class ServiceFooClient : System.ServiceModel.ClientBase<CurrNamespace.ServiceFooRef.IServiceFoo>, CurrNamespace.ServiceFooRef.IServiceFoo
    {
        // Widen the default one-minute timeout on the underlying channel.
        public void SetOpTimeout(TimeSpan timeout)
        {
            ((System.ServiceModel.IContextChannel)base.Channel).OperationTimeout = timeout;
        }
    }