18 October 2009

Fresh Perspective: Specifications

!!!!!!!!!!!!!!!!!!!!!! Mazal tov to Tair on her engagement! !!!!!!!!!!!!!!!!!!!!!!

Last time I wrote about documentation, I was huffy and annoyed.

Not today.

After posting a link to Joel on Software, I followed my own advice and started reading everything that caught my eye there. And there was a lot that caught my eye.

Notable for today's post are Joel's preachings on functional specifications. Reading them made me think again about what a good spec can do for me, the developer. And my teammates.

The upshot of this is that I am (voluntarily) writing a quick spec of what I expect from a particular functional module in the project. Our original monster document never clarified how the end user will accomplish his goal via interactions with the system. But before we can come out with our long-awaited initial version, we need that clarification. Everyone seems to have a different idea of how the interaction should go, but I think that a good spec would enable us to discuss our ideas in a clear format.

29 September 2009

I can tell she is a born.... HUNGARIAN! (bravo, bravo, bravo!)

The title is for all you fans of My Fair Lady.

On to the actual topic: variable naming conventions.
http://www.joelonsoftware.com/articles/Wrong.html

Oh, wow. I really need to start following that blog!

(yes, I got totally distracted for about an hour. Check out this:
http://www.joelonsoftware.com/articles/fog0000000332.html)

(OK, make that 3 hours, and just subscribe to the above-mentioned blog. I did.)

Back to my original point: variable-naming.

When you get someone else's form, which you are meant to add code to, how do you know what (s)he named the controls? Let's say there are five textboxes, three radio buttons (in a single list), and two comboboxes (or drop-down lists, as you please).

Name ____________
Age ___
Weight ___ in O lbs O kilo O stone
Home town _____________
        State __|\/| (that's a combobox, OK?)
Favorite football team ___________|\/|
Favorite food ____________

Now then. I need to pull some data from the form (whether to validate it or for my own nefarious reasons). How do I know what the "Home town" textbox is called? What might it be?

I could go into the form and select the textbox, then look at the Properties box.

That means waiting:
1. wait for form designer to load (seconds)
2. wait for Properties box to load with my selected control (at least one more second)

Or I could make use of Intellisense by entering the first few characters of the control's name, with it prompting me with possible completions. This takes far less than a second... if the control's name is easily guessable.

If the textboxes have names like these:

UserName
Age
Weight
Units (for the list of options)

then Intellisense can only help me if I have a good guess of what my coworker had in mind. So is the control called City? Town? HomeTown? What about the combobox for the state? HomeState?

This is where Hungarian comes in. If you read the wiki on Hungarian notation, you'll find that this particular method of naming your variables has its detractors. Especially in the form I'm suggesting. More on that momentarily.

Hungarian notation means adding a prefix or suffix to your variable names (here, control names) indicating their type. Intellisense works with the first characters you type in, so I'm pushing for prefixes. Here are some Hungarian-named controls:

txtName
txtAge
txtWeight
radUnits

What does that do for me? If my team has been consistent about this, I don't care whose page I work on - I can get Intellisense to show me a list of all the controls of the desired type in three keystrokes (the first three characters of the variable name - that is, the prefix). Then I can choose from the descriptive remainder of the controls' names to get the actual value I want.

This form of Hungarian notation has been deprecated for being redundant - it doesn't add any information because the IDE already keeps track of variable types, and when you're writing a function the variables you want are either passed in as arguments or declared locally (as close to first use as possible, please, and no ten-page functions, and no global variables).

When we're dealing with a form on a designer, however, the story changes. Controls are not declared anywhere that is visible from the page of code you're working on; to view the original declaration, as mentioned above, takes more time than looking up or scrolling up. When you don't know the name of the control you want, only its type and the general meaning of its content or use, prefix Hungarian enables the use of Intellisense and makes life simpler and easier.

22 September 2009

Null Coalescing

I had seen this before, but now it hit me:

?? is an operator in .NET 2.0 and up.

http://weblogs.asp.net/scottgu/archive/2007/09/20/the-new-c-null-coalescing-operator-and-using-it-with-linq.aspx

So all those lines of code that look like this:

int myInt = (myNullableInt.HasValue) ? myNullableInt.Value : 0;

can now look like this:

int myInt = myNullableInt ?? 0;
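It also chains. Here's a quick sketch (the variable names are my own invention) showing how fallbacks line up, evaluated left to right until a non-null value appears:

```csharp
using System;

// Hypothetical settings sources; both are empty here, so the literal default wins.
int? fromConfig = null;
int? fromDefaults = null;

// ?? chains: the first non-null value from the left is used.
int timeout = fromConfig ?? fromDefaults ?? 30;

Console.WriteLine(timeout); // prints 30
```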

Hurray!

16 September 2009

LINQ to SQL, LINQ to Entities

Just read a fascinating thread on the MSDN forums:

Very inefficient SQL generation in EF?


(although one of the posters there complained about how the link to this thread is not permanent...)

In the new world of .NET data access, it is no longer acceptable to deal with DataTables in the Business Layer of your application. The DataTable is meant to bridge between the world of the datastore (read: database) and business entities. In former days, Visual Studio's offering in the realm of closing the gap between data records and business entities was the strongly-typed DataSet (ADO.NET, if you please). When you added an SQL Server database to your project (this is still true of VS 2008), the IDE immediately started a wizard to generate a .xsd file defining classes derived from DataTable (and its relatives, DataRow and so on) that would be strongly-typed to match your database's schema. In essence, this wizard was about halfway to mapping your data from the hierarchical-relational model to the object model. It was one step short of having your records translated to full-fledged objects.

In order to achieve the full translation, a new mapper was needed: one that would translate your hierarchical-relational model to business entities. The disadvantage of such a mapper would be the way it limits querying: if the object you are talking to is not a table or view, you can't query it.

With the advent of LINQ, the story changed. Microsoft developed LINQ to SQL to fill the gap between table and business entity, and with the capabilities of LINQ, you can write code to query collections of objects. It's nearly as easy to write code in LINQ as it is to query tables of records in regular SQL. (Sometimes it's actually quicker to code in LINQ, but complex hacker-style queries are still best in SQL.)
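For instance, here's a quick sketch (my own made-up data) of a LINQ query over an in-memory collection that reads almost like the SQL it replaces:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Roughly: SELECT Name FROM People WHERE Age >= 18 ORDER BY Name
var people = new List<Person>
{
    new Person { Name = "Dina", Age = 17 },
    new Person { Name = "Liora", Age = 25 },
    new Person { Name = "Avi", Age = 30 },
};

List<string> adults = (from p in people
                       where p.Age >= 18
                       orderby p.Name
                       select p.Name).ToList();

foreach (string name in adults)
    Console.WriteLine(name); // Avi, then Liora

class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}
```

The same query shape works whether the source is a List or a LINQ to SQL table; in the latter case it gets translated to SQL for you.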

The people who make Visual Studio actually published TWO object-relational mappers with VS2008: LINQ to SQL and LINQ to Entities. LINQ to SQL is heavily tied to SQL Server; LINQ to Entities is part of the Entity Framework, which is supposed to support many data providers.

It seems that the Entity Framework is still in the experimental stage, not yet market ready. The people on the forum linked above were unimpressed with the way LINQ to Entities translates your query into SQL. LINQ to SQL, on the other hand, rates pretty well on performance, at the price of some flexibility.

The bad news is that LINQ to SQL is no longer under development; MS sees the Entity Framework as its next-generation ADO.NET. The good news is that the LINQ to SQL team has been integrated into the Entity Framework team, so there's hope that at least the SQL Server implementation of LINQ to Entities will be as good as LINQ to SQL.

This being the case, LINQ to SQL will stick around a while longer, at least until the Entity Framework team manages to port the good things (read: provider-specific optimizations) from LINQ to SQL into their SQL Server provider.

15 September 2009

Version Control for the Server-Challenged

Some background: there are three of us working on a large project. We are not located in the same city; two of us are about an hour's drive away, given traffic, and the third is in a totally different region of the country. Each of us is developing web pages on her own machine at home, sometimes coming in to school and developing on her own (desktop) machine there. That means that at least six versions of the code exist (one for each location). Usually, we work on discrete sets of pages... but not always. For example, if I am working on localization and Liora is working on authentication, we each need to make small changes to ALL of the pages in the set.

The solution to our problem should be a version control server, like Subversion. If we were all using Visual Studio proper, we could even use an add-on called Ankh to integrate Subversion with the IDE. (I personally am using Visual Studio Express at home, so that won't work, anyway.)

The issue with this solution lies in where to store the repository - the central "original" from which all copies are checked out and to which all updates are committed. Subversion (and its client applications like TortoiseSVN and AnkhSVN) offers five options:
  1. local file
  2. http (on an Apache server)
  3. https (same as above, but secure)
  4. svn (on an SVN-enabled Linux server)
  5. ssh+svn (same as above, but secure)
Students are notoriously short on cash. The obvious result? We can't afford our own web server for this project. While we each have space on our school's web server (some of us even know how to access that space), we don't make the decisions of what software that server will run. I tested to see if our server is already running Subversion or CVS, its predecessor. It is running neither.

This being the case, how will we manage version control?

The only option left open to us is 1, the local file. Local where? Well, since this is my idea and I'm working on my home machine, local here. And then everyone else will need to send me their versions for periodic commits and updates. This is not optimal, because it turns an engineering problem (where the tools would force the appropriate action to ensure correct results) into a personnel problem (where with every additional worker, the chances of something going wrong increase). Some whip-cracking may be involved.
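For the record, setting up option 1 takes only a couple of commands (the paths here are hypothetical); everyone else then works against the copies we pass back and forth:

```shell
# Create a file-based repository on the local disk
svnadmin create /home/me/repos/project

# Import the current code, then check out a working copy from it
svn import ./WebSite file:///home/me/repos/project/trunk -m "initial import"
svn checkout file:///home/me/repos/project/trunk WebSite-working
```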

Life's not fair. But we can get by.

09 September 2009

More Asynchronous Stuff

We've divided up the project, and I have selected a new (and exciting?) role for myself: the UI developer. Web UI, that is.

Taking a look around at popular sites, I can tell you already that I need to learn all about making good use of AJAX. Let me attempt to define it, based of course on an acquaintance of less than three weeks.

Traditional ASP and ASP.NET sites are heavy on the server-side processing. For any event you want to handle, the page needs to post back to the server and be loaded again. If your page is large, that's a lot of work. There are ways to pare down page size (by enabling ViewState only on the controls that need it, for one thing), but the bottom line is a page reload for every click of a button. This means slow loading, flickering, and a choppy user experience.

Popular sites are developed by people who know what their audience wants. And while I am not yet a developer of a popular site, I can tell you right now what my audience wants: a site that loads quickly and that doesn't flicker. (Or hang up. Or crash their browser. More on those in a minute.)

Enter AJAX, a technology that was available as an add-in for Visual Studio 2005 and is a charter member of Visual Studio 2008. With AJAX, you can integrate asynchronous Javascript event handling into your page. The Javascript handlers asynchronously call server-side functions (exposed via .asmx web service or WCF - and one other method, for another post). The user continues interacting with a living, breathing page while the asynchronous request waits in the background for a response from the server; when the response arrives, a callback updates the page accordingly and all is well. Mostly.

Note that I said the handlers asynchronously call server-side functions. The call is asynchronous; the server-side functions are not necessarily. What does this mean to my end-user? What does this mean to me?

The trouble comes in when the server response takes a long time. If many requests come in, and enough of them are taking a long time, then my server will run out of threads and my site will hang up. (Remember what I said? My users won't want that.)

The solution, my friends, is easier said than done: implement asynchronous calls where appropriate on the server side. For example, if my service function is doing I/O with the data store, the call to the DAL should be asynchronous. That way, the thread working on that service call can sleep instead of "busy waiting" and free itself up for another service call (maybe one that doesn't need any heavy I/O).
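Stripped of any real ASP.NET plumbing, the shape of the solution is a sketch like this: hand the slow work to another thread and rendezvous when it finishes, instead of blocking the whole time. (The Sleep stands in for slow I/O.)

```csharp
using System;
using System.Threading;

ManualResetEvent done = new ManualResetEvent(false);
string result = null;

// Park the slow "I/O" on the thread pool...
ThreadPool.QueueUserWorkItem(delegate
{
    Thread.Sleep(100);    // stand-in for a slow call to the data store
    result = "42 rows";
    done.Set();
});

// ...this thread is free to do other work here...

done.WaitOne();           // rendezvous once the work completes
Console.WriteLine(result);
```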

Sounds great in theory. What about in practice?

I think I'll go practice...

03 September 2009

WCF Timeouts

Did you ever search all over the web and only find the same wrong answer, over and over? I recently had this experience.

In an unrelated project, I was trying to set up a notification system using WCF. At some unspecified point in time, the web service would receive a notification for the client and would need a way to propagate that notification. Our thought had been to implement this with asynchronous calls to the service.

When using Visual Studio's very handy WCF creation environment, immediately after building your WCF app you can have another project in the same solution discover it. The client project can then choose to implement the WCF service function calls "asynchronously" (this is a checkbox in the configuration dialog for adding/changing a service reference).

Now, let's get this straight once and for all: the call to the service is NOT asynchronous. When you add in the asynchronous operations, what ACTUALLY happens is that the generated client class creates a new thread to handle the call, then has that thread notify your main thread when the call returns, so that the function call is asynchronous to your client program. But the actual call to the server is synchronous and will time out in a minute. Let me stress, in ONE minute.

The default operation timeout on a WCF client channel is 00:01:00. And contrary to popular misinformation, the settings in your .config file have nothing to do with this. The error message itself includes this line:

Please consider increasing the operation timeout (by casting the channel/proxy to IContextChannel and setting the OperationTimeout property)

That being the case, your "asynchronous" call has exactly a minute to be notified and return with the goods, or you are catching an exception. How to increase the operation timeout? By casting the channel to IContextChannel.

I admit that though my reading comprehension skills are at least average, I missed the tip in the error message and wasted hours playing with .config files, just as the forums suggested. Until I found this:

http://www.codeproject.com/KB/WCF/WCF_Operation_Timeout_.aspx

Yes, the error message had it right.

Since we're working with the autogenerated client class, it would be unwise to go into the (hidden) Reference.cs file that contains the ServiceFooClient class. Note, however, that ServiceFooClient is a partial class. So we can make our own file, call it ServiceFooClient.cs, and extend the class from there:


    public partial class ServiceFooClient : System.ServiceModel.ClientBase<CurrNamespace.ServiceFooRef.IServiceFoo>, CurrNamespace.ServiceFooRef.IServiceFoo
    {
        public void SetOpTimeout(TimeSpan timeout)
        {
            ((IContextChannel)base.Channel).OperationTimeout = timeout;
        }
    }
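With that in place, the consuming code can set the timeout before kicking off the long call. A sketch (NotifyMeAsync is a stand-in for whatever generated async method you actually call):

```csharp
ServiceFooClient client = new ServiceFooClient();
client.SetOpTimeout(TimeSpan.FromMinutes(10)); // ten minutes instead of one
client.NotifyMeAsync();                        // hypothetical generated async call
```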

01 September 2009

DataList, Templates, and UserControls

Tair had a brainstorm this week: why not use a custom UserControl together with a DataList and an ObjectDataSource? When using a UserControl, we can set one property of the type of our business entity in the control, and in the setter of that property fill the child controls of the UserControl with the needed data.

Just to show that this can be done, I wrote up some stupid code. I'll put it up in a minute at cc.jct.ac.il/~fast/ObjectDataSrcExample.zip. Anyway, here's what I did:

Just as in the past, I have a Manager class and an Info (business entity) class in the BL. The Manager class has a Select() method that returns a strongly-typed List of my Info class.

namespace PizzaPlanet
{
    public class PizzaInfo
    {
        public string Size { get; set; }
        public bool Peppers { get; set; }
        public bool Olives { get; set; }
        public bool Mushrooms { get; set; }
        public PizzaInfo AllInfo { get { return this; } }
    }

    public class PizzaManager
    {
        public List<PizzaInfo> Select(List<PizzaInfo> SavedList)
        {
            return SavedList;
        }
    }
}

Note what's different this time: my Info class has an extra property AllInfo that returns this. Meaning the whole business entity. When you use an ObjectDataSource for databinding, it exposes the properties of every entity in the returned list, but not the entire encapsulated object. So it's convenient to have a property that returns this.

Now, here's the source code for my user control:
public partial class PizzaControl : System.Web.UI.UserControl
{
    private PizzaPlanet.PizzaInfo _internalPizza;

    public PizzaPlanet.PizzaInfo InternalPizza {
        get { return _internalPizza; }
        set {
            _internalPizza = value;

            // Small pizzas get a 200x200 image; everything else gets 400x400.
            int side = (_internalPizza.Size == "Small") ? 200 : 400;
            imgPizza.Width = side;
            imgPizza.Height = side;

            // Build the image name from the toppings, in the order
            // mushroom, olive, pepper: the first topping is lowercase,
            // the rest capitalized (e.g. "mushroomOlivePepperPizza.png").
            List<string> toppings = new List<string>();
            if (_internalPizza.Mushrooms) toppings.Add("Mushroom");
            if (_internalPizza.Olives) toppings.Add("Olive");
            if (_internalPizza.Peppers) toppings.Add("Pepper");

            string name;
            if (toppings.Count == 0)
            {
                name = "plain";
            }
            else
            {
                name = toppings[0].ToLower();
                for (int i = 1; i < toppings.Count; i++)
                    name += toppings[i];
            }

            imgPizza.ImageUrl = "~/images/" + name + "Pizza.png";
        }
    }
}


The control itself contains one Image, named imgPizza. All my .cs is doing is setting the toppings and size of the picture when InternalPizza is set.

Now for the actual form:
we have an ObjectDataSource with its TypeName set to my Manager class. The SelectMethod is set to PizzaManager's Select() method.

To make this sample easy to work with, I have the user enter new pizzas, which the order button saves in the Session, and then when the form reloads the ObjectDataSource passes the saved list from the Session to the PizzaManager (note that I disabled ViewState for all my controls).

The visible part of the form is a DataList. The DataList accepts ItemTemplates only (no BoundColumns or other predefined options here). So our ItemTemplate consists of exactly one control: the UserControl we defined.

(For the uninitiated, here's a quick list of what to do:
Add a DataList to your form.
Click on the side-arrow next to the new DataList.
Set the DataSource to your ObjectDataSource.
Click EditTemplates.
Select ItemTemplate.)

Normally, we would set the bound properties of the control contained in the Template via a wizard. When working with a UserControl, though, custom properties don't register in the wizard, so we're going to do this by hand. The InternalPizza property of our PizzaControl needs to be bound to Eval("AllInfo") -- remember, InternalPizza is of type PizzaInfo, and AllInfo returns the entire object. Here's the code from the .aspx:

        <asp:DataList ID="DataList1" runat="server" DataSourceID="odsPizza"
            EnableViewState="False">
            <ItemTemplate>
                <uc1:PizzaControl ID="PizzaControl1" runat="server"
                    InternalPizza='<%# Eval("AllInfo") %>' />
            </ItemTemplate>
        </asp:DataList>


That just about wraps it up. I think I'll go have lunch...

18 August 2009

More on Linux Partition Rescue

The story goes that Windows's partition manager messed up my Linux partitions, to the extent that I could not see the file system on them or boot from them.

After searching online, I found instructions here and here. Whew!

Here's what I did:

I loaded Parted Magic onto my USB drive with UNetBootin. I love that utility!

Then, following the instructions from the links above, I restored the superblock on my Linux drive from a backup superblock. So my file system was back!

The bad news is that Windows's partition manager changed the designation of my primary partition containing Linux (!?!), so that install of Linux won't boot anymore. But at least I recovered my data.

17 August 2009

Anti-Microsoft?

I know, this post hardly belongs on a blog that's supposed to be about ASP.NET and the cool tools that Microsoft provides. Bear with me.

I've been working on an unrelated job, and to that end I started playing with Linux. I know, the beginning of the end, right?

How is it that Linux is so sensitive to the possibility that you might be running another OS and Windows isn't? e.g., when I use the Linux utility to edit partitions on my hard drive, it makes me go back and exit Windows properly first (last time there had been an error returning from Standby). When I use the Windows utility to remove a partition that has NOTHING to do with Linux, it messes up my bootloader so that my computer won't boot up at all. The Windows install disk got Windows up and running again, at the expense of a new little partition on which I installed a new copy of Windows, and which can be deleted at my leisure.

In my search for the right utility to get Linux going again, I discovered this little gem. Called "UNetbootin", it actually would have been extremely helpful when I was installing Linux in the first place - it's a utility that lets you install a Linux installation .iso image or other bootable utility to a hard drive partition, or a USB drive, or a floppy, or a CD. The utility I have in mind for today is Super Grub Disk, a utility that will give me all the boot options for my system, not just the Windows ones!

09 July 2009

|DataDirectory| distress

The issue first came up in a project last semester, and again it haunts us:
when you place your .mdf file in one project (the DAL) and execute the application from another project (the UI) evil things happen. Namely, you get an exception:

System.Data.SqlClient.SqlException was unhandled by user code
Message="An attempt to attach an auto-named database for file blah blah blah\PL\myDB.mdf failed. A database with the same name exists, or specified file cannot be opened, or it is located on UNC share."

As I say, this is evil. The file myDB.mdf is right where I put it; when I go into Settings for my DAL project and open the nifty dialog box for my connection string, the "Test Connection" button works just fine.

But just you take a closer look at that error. When I compile my application, the system starts looking for my DB in the directory that contains my PL, not my DAL. What happened?

When I open the nifty dialog box, the path for my DB looks like this:

blah blah blah\DAL\myDB.mdf

The actual value of the setting in my DAL\Settings.settings file looks like this:

Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\myDB.mdf;Integrated Security=True;Connect Timeout=30;User Instance=True

Ha ha! Foul play! I can spend the whole day setting the location of the DB in the nifty dialog box, but in the bitter end the Settings.settings file translates the path of the parent project to a variable called |DataDirectory|, and at compile time it replaces my DAL project's path with the startup (UI) project's path for the value of that variable.

What's the right workaround?

1. The wrongheaded way would be to elbow my way into the Settings.settings and set the path absolutely to the location of my DB. Wrongheaded, because if I move my project around (even on my own machine), I have to elbow back in and update the location.

2. The easy way out would be to use SQL Server instead of a local data file. Which is of course what we're doing on our project.

3. I thought to try to place the DB in the parent folder of the projects - ie, in the solution folder. But that induced Settings.settings to hard-code the location. Not much better than solution 1.

4. Finally I found what I was looking for! by jumping from here to here to (at last!)

https://blogs.msdn.com/smartclientdata/archive/2005/08/26/456886.aspx

So I added the following line of code to a function that is executed before I try to access my data:

AppDomain.CurrentDomain.SetData("DataDirectory", AppDomain.CurrentDomain.BaseDirectory.Replace("GUI", "DAL"));

where GUI is the directory of my UI and DAL is the directory of my DAL and they are in the same parent (solution) directory.

But really DBs were meant to be on servers, and anyway 3 layers are 2 layers too many. So claims Microsoft.


Now considering how much trouble this has caused a good number of people since 2005, I think it's time for Microsoft to rethink its nifty features.

02 July 2009

Pile on the layers

Here's a quote from MSDN:

"Most ASP.NET data source controls, such as the SqlDataSource, are used in a two-tier application architecture where the presentation layer (the ASP.NET Web page) communicates directly with the data tier (the database, an XML file, and so on)."

This is a cute way of saying that all those nice data-bound ASP.net controls - the ones that are supposed to take a data source and then do all the work of Selecting, Inserting, Updating, and Deleting (ie, CRUD) for you - will not work in a 3-layered structure.

That was the bad news, except that we knew it all along.

Here comes the good news:

"The ObjectDataSource works with a middle-tier business object to select, insert, update, delete, page, sort, cache, and filter data declaratively without extensive code."

http://msdn.microsoft.com/en-us/library/9a4kyhcx.aspx

So what do we need to do in order to get the data-sourced controls to work with an ObjectDataSource and, by extension, a BL and DAL?

Creating an ObjectDataSource Control Source Object has some answers. We need to define a stateless class (no non-static members) to provide CRUD logic for the data to populate the data-bound control on our form. Optionally, this class can also provide functions to filter the data and sort it. Then, using the (very handy) wizard provided by the ObjectDataSource, we select the class with the CRUD functions and identify the parameters it needs to perform Select.

Very well, but what about the parameters for Insert, Update, and Delete?

We can define another class to represent a record/row in our data schema. (The DataObjectTypeName property of the ObjectDataSource must be set to this class; TypeName stays set to the class with the CRUD methods.) All properties of the class will be polled (by reflection, one presumes) in order to fill the data-sourced control, and will be filled in return for the Update function. By setting the ConflictDetection property of the ObjectDataSource, we can even decide how updates should be done:

OverwriteChanges will simply fill an object with the new values and pass it to the Update function.
CompareAllValues will fill two objects: the first with the old values, the second with the new, and pass both to the Update function.
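In signature terms, and following the order just described, the BL ends up with something like this sketch (the class and parameter names are my own; the real signatures are whatever the wizard resolves against your build):

```csharp
// Sketch of BL update methods for the two ConflictDetection modes.
public class PizzaManagerSketch
{
    // ConflictDetection="OverwriteChanges": one object, carrying the new values.
    public void Update(PizzaInfo newValues)
    {
        // massage and hand off to the DAL
    }

    // ConflictDetection="CompareAllValues": old values first, then the new ones.
    public void Update(PizzaInfo oldValues, PizzaInfo newValues)
    {
        // detect conflicts, then hand off to the DAL
    }
}
```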

Two items of note:
1. The wizard will only work if your latest working build contains all the classes and functions you want to use! In other words, Build your project before running the wizard.

2. The CRUD functions I'm describing here are in my BL. They themselves do NOT do the actual persistence to the DB; rather they massage the data as appropriate and pass it in the appropriate format to the DAL, which does the real work.

30 June 2009

Dude, where's my data access?

As we dig deeper into the UI layer, the ghostlike form of the Business Logic layer (BL) is peering out at us from the future. A little planning is in order.

Today's issue is the question of where to place the code that generates our queries on the DB. We decided at an early stage (see below) to use LINQ to SQL for our Data Access layer (DAL).

LINQ to SQL, when you drag tables from an SQL Server database onto it, automatically reads and generates the existing relationships between your tables.








When you then save your LINQ to SQL class, called a DataContext, it autogenerates a code file with all the classes necessary to

1) represent your data entities in your program as business entities

2) persist your entities to the database

The classes we wrote for the DAL do the very barest CRUD logic (create, retrieve, update, and delete) on one entity type at a time. Note that with LINQ to SQL, we need never write a single SQL string. If we built the relationships correctly, furthermore, we need never write a single LINQ query with a "join" - because the DataContext should be doing all that for us.

The beauty of LINQ is that once I have a Person or collection of Persons, as long as it is still attached to the generating DataContext (that means it has never been serialized and deserialized... a topic for a different post), I can pull on Person.Phones and that will (the first time I call it for this instance) query the database for the phone objects related to the person at hand.
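A sketch of that deferred query, assuming the designer generated a context called OurDataContext with Persons and Phones tables (all of these names are illustrative, not our actual classes):

```csharp
using (var db = new OurDataContext())
{
    // One query runs here: SELECT ... FROM Person WHERE Id = 42
    // (42 is just an illustrative key value)
    Person person = db.Persons.Single(p => p.Id == 42);

    // No phones have been fetched yet. Enumerating Phones fires a
    // second, deferred query against the database for this person:
    foreach (Phone phone in person.Phones)
        Console.WriteLine(phone.Number);
}
```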

And so, the question: how much of this pulling can responsibly be done in the BL?

The detail of when the query is executed is not an issue in itself; the question is which code is responsible for executing it. So if the autogenerated code for the DataContext is in the DAL, and a line in the BL causes it to fire and query the DB, have we violated any principle of 3-layered structure? I don't think we have. And if we decide at some later date to use LINQ to Entities or some other mechanism in our DAL, we can provide classes that expose the related objects just as well as LINQ to SQL does. And the BL need never worry about when the data is read from the DB.
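One way to hedge that bet is for the DAL to expose the related objects behind its own contracts, so the BL never names LINQ to SQL at all. Everything below is a hypothetical sketch, not our actual code:

```csharp
using System.Collections.Generic;

// Hypothetical DAL contracts: the BL programs against these, and a
// LINQ to SQL implementation (or a future LINQ to Entities one) is
// free to defer the Phones query however it likes.
public interface IPhone
{
    string Number { get; }
}

public interface IPerson
{
    int Id { get; }
    string Name { get; }
    IEnumerable<IPhone> Phones { get; }  // may lazily query the DB
}

public interface IPersonRepository
{
    IPerson GetById(int id);
}
```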

29 June 2009

Not a member? Join today!


The time has come to produce a version of the UI. For a web application like ours, we must answer the question: how do we handle user authentication (eg, registration and login) ?

ASP.net is designed to deal with these issues with what it calls "membership".

If the programmer is really lazy (this can be a good thing), ASP.net will generate an invisible database called aspnetdb.mdf in your App_Data folder and invisibly call on a class to persist to that database. A (local) web page, opened directly from the Visual Studio menu, lets you manage the users in this database at design time.

The Login family of controls provided by ASP.net plug directly into this system. And from your code, you can easily check if the current user is logged in and other details of his "membership" status.
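For example, from a page's code-behind (Membership, MembershipUser, and FormsAuthentication are the real System.Web.Security classes; the greeting label is a hypothetical control on the page):

```csharp
// Once membership is configured, checking the current user is trivial:
if (User.Identity.IsAuthenticated)
{
    MembershipUser current = Membership.GetUser();  // the logged-in user
    lblGreeting.Text = "Hello, " + current.UserName;
}
else
{
    // Not logged in - send him to the login page
    FormsAuthentication.RedirectToLoginPage();
}
```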

What about this don't we want?

The Login family of controls provided by ASP.net is based on inclusion of the (invisible) database right there in your UI layer, which means that user data is not coming from your own database. Additionally, this is not in keeping with the 3-layered look. Altogether, not part of our original plans.

You can tell the membership provider where its database is... but then you're overriding the default provider datastore defined in machine.config by adding elements to web.config, and then your table structure has to be exactly what the canned membership provider is expecting. And, of course, the membership provider is in the UI, so your database must be there, too. Still not 3 layers, still pretty evil.
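For reference, that web.config override looks roughly like this (SqlMembershipProvider is the real canned provider; the provider and connection string names are placeholders):

```xml
<system.web>
  <membership defaultProvider="SqlProvider">
    <providers>
      <clear />
      <add name="SqlProvider"
           type="System.Web.Security.SqlMembershipProvider"
           connectionStringName="OurDatabase" />
    </providers>
  </membership>
</system.web>
```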

For the price of lots of extra work, we can implement our own membership provider class and (oh, joy) still preserve the 3 layers AND use Microsoft's Login controls. This is good, because ultimately the vast majority of the code we're writing for our customized membership provider is functionality we need in order to have a website with sufficient authentication. And, as we have little to no experience with authentication, if Microsoft is dictating what our authentication class needs to support, we're more likely to reach our destination.
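A minimal sketch of the shape of such a class (UserLogic and its method are hypothetical stand-ins for our BL; the real MembershipProvider base class declares many more abstract members, all of which must be overridden before this compiles):

```csharp
using System.Web.Security;

// Sketch of a custom provider that answers the Login controls from our
// own database instead of aspnetdb.mdf.
public class OurMembershipProvider : MembershipProvider
{
    // Called by the Login control when the user submits credentials.
    public override bool ValidateUser(string username, string password)
    {
        // Hypothetical call into our own BL/DAL:
        return UserLogic.CheckCredentials(username, password);
    }

    // ...the remaining abstract members (CreateUser, GetUser,
    // ChangePassword, and so on) are elided from this sketch...
}
```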

A final point: the Login controls will never work with MVC. But MVC is a tool for us, not a prison - so when it serves us to break it, we will!

23 June 2009

Human Events: A Sequel to Yesterday's Post

After we took two months to generate documentation that was expected in a few weeks, our client seems understandably unwilling to meet with us.

Did we take too long? Of course. Why? Because the work was so difficult for us. Drawing up a large set of statecharts or use-case diagrams is not difficult work, and it's useful in helping the programmer think through the scenarios he needs to handle. Writing up a list of possible screens, possible behaviors, and possible messages is tedious, difficult work to someone who is used to modeling in a graphic language.

What would have happened if we had refused to write such extensive, formal documentation? Could it have been worse than the current situation? Maybe the current situation is salvageable, where that one would not have been. I think we should have negotiated a different form of documentation from the start. Now we're going to have to crawl...

22 June 2009

Human Communications


With all the documentation we're being induced to write, I am reminded of two works: Strunk and White's Elements of Style, and Tufte's Visual Display of Quantitative Information. I wonder, in all the meters of ink on paper we're churning out, how much data is really getting through to the other side. I wonder if our message is clear. I wonder if we even have much message to deliver.

Documentation, we are informed as students, is a necessary part of the software development process wherever multiple parties with different interests and backgrounds are involved (e.g., stakeholders and programmers). Proper software development process, they tell us, will help save your project from being so error-prone as to be useless, or so far from satisfying your end user as to be rejected.

Is this true? Documentation kept us away from design and code for about two months. This was mostly because drafting documentation to meet the standards set by our client company was completely foreign to our thought patterns. Can this really be the best way to ensure that we're doing it right? At what expense? And considering how far we had to bend our thought patterns to plan and design in this way, how likely is it that we will succeed in actually delivering a product that conforms with what we described?

When the method gets so painfully unnatural, I am convinced that there must be a better way.

21 June 2009

Introducing a Project


Now that we're really starting to code our final project (at least, even if it's a false start, we're getting very close) I want to start a record of the cool stuff we do. More importantly, I want to record the cool stuff we decide not to do after trying it, because those things won't show up in our final project but still deserve to be remembered.

A brief description of the project is in order:
we are developing a website for an existing company (not their flagship website), using their databases and their servers. At their request, we are writing documentation at intervals, but this documentation slows us down to a near-standstill. I think we should refuse to write documentation in the future, except in very limited measures. (?)
The emphasis of the project is supposed to be a smooth user experience and efficient use of data (by way of OLAP).

A summary of cool things we probably won't be doing:
ASP.net MVC - as we know nothing of unit testing, which seems to be its sole advantage
EDM - we still haven't figured out why we would want this instead of LINQ to SQL
Data controls - objects in the business layer vs easy binding in UI? Business layer wins

Cool things we probably will be doing:
MVC - implemented by us, not by the developers of C#
WCF
AJAX
Algorithms - to determine user preferences

Now let's see what my partners are up to with their parts of the DAL...