Sunday, November 27, 2011

Fail to Login Gmail via Email Client

If you are suddenly unable to log in to your Gmail account, or your email client keeps prompting you for your username and password with an "invalid credentials" or "Web login required" error, your account is most likely locked. In most cases, you can still log in directly through the Gmail web interface.

To unlock your account, use the Google Accounts UnlockCaptcha page. Everything should be reset within seconds.

Reference:

Trouble logging in to Gmail: https://mail.google.com/support/bin/answer.py?answer=78754

Tuesday, August 9, 2011

Stay away from imageshack.us

How many of you use imageshack.us for image storage and sharing? I started using it recently and thought it was a good service. It turns out to be disappointing: it forces unregistered viewers to register with them before they can view the images.

For some unknown reason, I have no problem viewing the page from within the US, and I am not required to log in to view the images either. But for many people, especially those living outside the States, what they see instead is a frozen frog saying "Domain Unregistered. To view, register at bit.ly/imageshack-domain." It appears that imageshack.us forces unregistered viewers to register before it will display the images.

Since I never see the frog myself, I would not have known if my friends hadn't told me. In the PDF files they sent me, every image on my page had become a frozen frog! I therefore had to upload the same set of images to photobucket.com and update all the image links on my pages. Fixing this took me a few hours.

If you are using imageshack.us for image storage and sharing, you had better move your photos somewhere else unless you can ensure that all of your audience can see them. Otherwise, your images may turn into frozen frogs that confuse your viewers.

Updated on Aug 23, 2011
I am currently travelling in Hong Kong, and the finding above is 100% confirmed. All pictures stored on imageshack.us turn into frogs when the page is accessed from outside the USA. The images are viewable only if the viewer has an imageshack account and is logged in to it.

Saturday, August 6, 2011

How to Disable Dell Mouse Stick Pointer on Windows 7

I personally find the Dell pointing stick very annoying; it always messes up my typing, so I have always kept it disabled. Today it suddenly became enabled again. It turns out that the Dell Touchpad application no longer existed on my laptop. I don't know how or why this happened; it may be related to a Windows update. To disable the stick again, I had to re-install the Dell Touchpad application. After a reboot, a Dell Touchpad tab shows up in Mouse Properties where it can be configured. The following is the procedure for disabling the pointing stick.
  • If you don't have the Dell Touchpad application installed, go to the Dell Support site to download and install it. Most of the time, a reboot is required after installation.
  • Go to Control Panel.
  • Select All Control Panel Items.

    All Control Panel Items

    Or type mouse in the search box and then select Change mouse settings.

    Change mouse settings

  • If the Dell Touchpad application is installed, a Dell Touchpad tab will show up. Click the image inside the Dell Touchpad tab.

    Dell Touchpad

  • Select Button Settings in the Dell Touchpad window.

    Button Settings

  • Select the Device Select tab, click the Disable radio button under Pointing Stick, and then click Apply.

    disable pointing stick
  • The pointing stick will be disabled instantly. Click OK and then exit the Dell Touchpad application.

Note that the procedure and screenshots above were captured on my Latitude E6510.

Friday, July 29, 2011

CarMax, 512SellCar.com, copart.com and IQ Auto Buyers Review

CarMax

No emails or phone calls are involved in the selling process. Car owners must visit an office in person for a physical inspection; a firm appraisal with an offer is issued and printed after the inspection. The offer lasts only 7 days. An extension of a few days is possible, but it requires revisiting the office to obtain a new document. The inspection takes about 30-40 minutes, while an extension takes around 15 minutes. If the offer is accepted, CarMax issues a bank draft that guarantees the funds are ready for collection. The bank draft definitely provides peace of mind to people who are leaving the States or returning to their home countries: they can be sure the money is available and waiting for them to collect even after they have left. After the sale, you have to arrange your own transportation home.

IQ Auto Buyers

IQ Auto Buyers purchases cars nationwide and offers field inspection. Recently, however, they changed their policies without notifying their customers: they no longer offer a free field inspection unless you are willing to drop off your vehicle at their local office. For details, see this post. Thus, the process advertised on their site is slightly different from what they actually promise. If you accept their offer, they will issue you a company check.

copart.com

copart.com originally bought salvaged/totaled vehicles that were still running and sold the parts to car dealers. They have since become a well-known online auto auction site and are a publicly traded company (NASDAQ: CPRT). They do buy cars directly from owners. Their process is very simple and their response is very fast. There is no physical or visual inspection; everything is based on your description of your car's condition, and they take your word for it. Because of this, their offers are ridiculously low; it feels like you are selling them a salvage car. The towing fee is not included, meaning that if you accept their offer, you pay the fee and they arrange the service to tow your car to their office.

512SellCar.com

It is the brand-new online office of Continental Automotive Group (CAG), which manages all of CAG's online nationwide transactions. 512SellCar.com has been in the market for less than 6 months. Every auto dealership belonging to CAG is currently based in Austin, Texas. As soon as you submit a request via their site or talk to them over the phone, they send you an email confirming that your request was accepted, along with the sales agent's information. The email carries the local dealership's logo instead of 512SellCar.com's, and the domain name in the sender's email address belongs somewhere else entirely (currently motosnap.com), so it looks like spam! Two follow-up emails are then sent within the next 4 days. The sales agent will contact you in a day or two. A verbal quote is usually given over the phone, based on your description and the basic information pulled from your VIN, so that you have a rough idea of what your vehicle is worth. If you pursue further, a visual inspection is required: you either visit their local dealership office or meet the sales agent somewhere for an appraisal. Unlike CarMax, no written appraisal document is issued unless you ask for one. Their offer is currently valid for 2 weeks. If you decide to sell your car to them, the local dealership writes you a company check right away. Unlike CarMax, no printed sale document states that your car was sold to them; the only proof is your receipt of the check, which carries a brief description of the purchase. The sales agent will then give you a ride home if needed. Currently, the service provided by 512SellCar.com is as good as CarMax's, the process is a lot simpler, and the price may even beat CarMax's.

Personal Experience

In some cities or states, certain types or models of vehicles are overstocked or simply cannot be sold. If you walk into a small local dealership and they tell you they already have a number of vehicles like yours, they are not interested in your car; they are just putting it politely and hinting you out the door. The cash or trade-in price they offer is definitely unreasonably low, and you should walk away. I am unfortunate enough to live in one of those places. I tried lowering my price; nothing helped. AutoTrader.com wasn't helpful either; indeed, it was a waste of the subscription fee. I concluded that the problem is not the price but whether anyone is interested in your car at all. In my case, I advertised for more than a month at the trade-in price offered by a well-reputed dealership and didn't receive a single inquiry. Then I changed the price to what I had been offered in cash by a dealership, and the situation remained unchanged. Therefore, if you hear no response to your ad for some time, you have to do some research and reach out to other options.

Summary

There is no doubt that it is ideal to sell your car to a private party for cash. But on some occasions that may be impossible, especially for people who need to get rid of their vehicle within a certain time frame; for them, selling to a private party may not be a good option. In addition, nowadays it is very difficult to sell a car at all unless you own a decent Japanese car that everyone wants.

If CarMax has a location in your city, it can be your last resort. They are happy to take your car regardless of its condition, and they issue a bank draft instead of a company check, which really gives everyone peace of mind. Even so, you should try other options first.

512SellCar.com is new. They are as good as CarMax, but they may not have a local dealership or office near where you live. Since they have just started their Internet service and still provide field inspections, consider giving them a call for details.

You may want to talk to IQ Auto Buyers if they have an office in your area. Otherwise, you can simply forget about them, because they no longer provide field inspections, unless you don't mind driving hundreds of miles to their office.

You should forget about selling your car to Copart Direct if you have a decent car, because their price doesn't make any sense.

Good luck to everyone who is trying to sell their car.

Tuesday, July 26, 2011

I really wish there was a good alternative to Craigslist

If there were something better than Craigslist, I would rather not use Craigslist at all.

In the past I liked Craigslist, but not any more, not least because of all the spam. Fortunately, I am not an active seller or buyer; I have only used it recently, and for the past 4 years I didn't visit Craigslist at all. As soon as I used it again, I found quite a few problems and began to understand why so many people are frustrated. I would not recommend Craigslist to anyone who can find a better alternative. If your neighborhood has a community website, you should promote it and try it first to avoid the fraud, and especially the frustration, that come with Craigslist.

Recently I tried to post something to Craigslist, and their phone verification drove me nuts. The application is not reliable, and Craigslist support is very poor; most likely you won't get a solid answer or any help from them. The verification may work for some people, but others may never get through, for no apparent reason. It doesn't support VoIP, which shuts out a lot of honest and legitimate users. With phone verification, Craigslist may eliminate some spammers on the seller side, but there are plenty on the buyer side. All sellers are at great risk while the buyers are somewhat protected. I also had to use a Google Voice phone number in my ad instead of email; otherwise, my email account would be full of Craigslist spam.

Using Craigslist can make you more frustrated than ever. If you hang around their help-desk discussion forums, you will find many people posting the same issues over and over: phone verification, ads not showing, trouble with re-posting, and the like. Those questions could easily be resolved if the Craigslist application alerted users and explained why their posts failed at the moment they placed their ads. Currently, Craigslist does nothing of the sort; instead, it tells users that their posts were successful and asks them to wait a few minutes for the update. Unfortunately, those posts are automatically ghosted or flagged as soon as they are submitted, so the ads are never shown or displayed in search, due to prohibited content, violations, unsupported hidden characters, or other issues. According to their forums, this is one of their ways of eliminating spam. To me, it is a waste of time and money: the support staff answer the same questions daily, again and again. Why hire people to answer the same questions over and over when there is a better way to handle the issue? I could be wrong and may not see the whole picture. In addition, the language used by the support staff is sometimes very rude and offensive [see one of the thread discussions below]. Worst of all, the Craigslist forums do not welcome user discussion and actively discourage users from discussing Craigslist issues, even though many issues remain unresolved. If you start a discussion in their forums, you may be flagged for thread hijacking without a valid reason and possibly reported to the staff. I wonder whether users care about that when they are already frustrated; why not let them vent their anger? I really don't understand why user discussion is an issue for Craigslist, or whether it is just a made-up rule by the support people running the show.

Luckily I no longer have to deal with these issues, but I feel the pain of the people who are still bothered by them, and fraudulent activity is still rampant on Craigslist. Nowadays there are more negative comments about Craigslist than positive ones. I hope that a better site will soon appear, or that Craigslist will begin to listen to its users.


Here is an example extracted from the Craigslist forums. The posters with handles <FeatureNotABug_>, <__:)__> and < - > are the support staff. The thread was started by a user called < jo27 >.

[ Full discussion on Craigslist forum ]

 Post Active not showing in Search < jo27 > 07/26/11 16:12  
      : . . your ad < - > 07/26/11 16:13  
      : . . : . . anybody else not as mean want to assist? < jo27 > 07/26/11 16:15  
      : . . : . . : . . your ad < - > 07/26/11 16:17  
      : . . : . . : . . : . . since this man is disrespectful, anyone else? < jo27 > 07/26/11 16:18  
      : . . : . . : . . : . . : . . so sorry for the troll, we < FeatureNotABug_ > 07/26/11 16:20  
      : . . : . . : . . : . . : . . : . . anyone for real on here besides these 2 clowns? < jo27 > 07/26/11 16:22  
      : . . : . . : . . : . . : . . : . . : . . Ignore, Flag and ignore § < getaliferudeass_ > 07/26/11 16:24  
      : . . : . . : . . : . . : . . : . . : . . : . . So can anyone help me with my post? < jo27 > 07/26/11 16:26  
      : . . : . . : . . : . . : . . : . . : . . : . . : . . Event services is for those < __:)__ > 07/26/11 16:30  

There are plenty of examples like this in the Craigslist forums. When I read them, I just cannot believe my eyes.

Thursday, June 30, 2011

How to change the Gateway Metric on Windows 7

If multiple physical adapters are present on the network, Windows 7 compares the gateway metrics of the physical adapters and uses the one with the lowest metric. To override the default, adjust the gateway metric for each physical adapter: the adapter with the lowest metric always takes precedence and is used by Windows 7 automatically. For instance, if you want to use the wired connection whenever both wired and wireless are available, assign the lower gateway metric to your LAN card; this ensures your preferred adapter is used by Windows 7 whenever it is available. In my previous post, I described how to use the route change command to adjust the gateway metric. In this post, I will show you an easy way to do it without opening a command prompt.

  • Open Network Connections from the Network and Sharing Center. Or type ncpa.cpl in the search box of Windows Explorer or the Start menu.
  • Select your preferred connection, e.g., Local Area Connection, then right-click and select Properties.
  • In the Networking tab, select the Internet protocol version, e.g., Internet Protocol Version 4 (TCP/IPv4).
  • Then click the Properties button.
  • In the protocol's Properties dialog box, click Advanced....
  • In Advanced TCP/IP Settings, on the IP Settings tab, click Add... under Default gateways.
  • Uncheck the Automatic metric checkbox, then enter your router's IP (e.g., 192.168.1.1) and assign your metric. Click Add to insert the entry.

    Default Gateway Settings
  • Click all OK's to exit.

The changes take effect immediately. If you check your route table afterwards (using the route print command), you'll find a new entry under Persistent Routes.

===========================================================================
Persistent Routes:
  Network Address          Netmask  Gateway Address  Metric
          0.0.0.0          0.0.0.0      192.168.1.1      20
===========================================================================

Regardless of whether the IP address is obtained automatically or assigned statically, the gateway metric can be changed via either the route change command or the network connection GUI (ncpa.cpl).

When you use route print to verify your settings, the metric shown is usually about double the number you entered.

Using netsh int ip show config will show the exact settings you find in the network connection properties.

I hope you'll find this information useful.

Wednesday, June 29, 2011

Forcing Windows 7 to use wired when available

For some unknown reason, Windows 7 prefers the wireless connection over the wired one. To force Windows 7 to use the wired connection when available, you need to adjust the gateway metric across the network adapters. Many posts found online recommend doing this via Network Connections (ncpa.cpl) by unchecking the Automatic metric checkbox and manually setting the Interface metric on each network adapter. Unfortunately, that only updates the interface metric, not the gateway metric; it has no effect on Windows 7, and the problem persists.

You can type the following command at the command prompt to see the details of your network adapter settings [see example]:

netsh int ip show config

Or type the following for the route table. In this command's output, the Metric column shows only the gateway metric. [see example]

route print

To change the gateway metric, there are two options. The first is to use the route change command at the command prompt. For example,

route change 0.0.0.0 mask 0.0.0.0 192.168.1.1 metric 20 if 13

where

  • 0.0.0.0 is the target network destination (IP address) found in route table.

  • mask 0.0.0.0 is the subnet mask associated with the target network destination.

  • 192.168.1.1 is the IP address of the gateway, my router.

  • metric 20 sets the gateway metric to 20. The network interface with the lower metric takes precedence. In this example, I am assigning 20 to my wired network card, giving it the lowest gateway metric; this forces Windows 7 to use the wired connection whenever it is available. See also KB299540.

  • if 13 applies the change only to the network interface with index 13. In this example, 13 is my Intel(R) 82577LM Gigabit Network Connection, which can be found in the Interface List section of the route print output.

There is no need to log out or reboot; the change takes effect immediately.

The second option, presented in my next post, may be preferable, especially if you don't want to run any commands. Go and see my next how-to.

References:
The meaning of metric numbers, see KB299540.
How to use Route Command.

Example of netsh int ip show config

Configuration for interface "Wireless Network Connection"
    DHCP enabled:                         Yes
    IP Address:                           192.168.1.2
    Subnet Prefix:                        192.168.1.0/24 (mask 255.255.255.0)
    Default Gateway:                      192.168.1.1
    Gateway Metric:                       25
    InterfaceMetric:                      50
    DNS servers configured through DHCP:  192.168.1.1
                                          192.168.1.1
    ...
    Register with which suffix:           Primary only
    WINS servers configured through DHCP: None

Configuration for interface "Local Area Connection"
    DHCP enabled:                         No
    IP Address:                           192.168.1.200
    Subnet Prefix:                        192.168.1.0/24 (mask 255.255.255.0)
    Default Gateway:                      192.168.1.1
    Gateway Metric:                       256
    InterfaceMetric:                      20
    Statically Configured DNS Servers:    192.168.1.1
    ...
    Register with which suffix:           Primary only
    Statically Configured WINS Servers:   None
    ...

Example of route print

The column of Metric in the section of IPv4 Route Table is gateway metric.

===========================================================================
Interface List
 13...5c 26 0a 23 40 d5 ......Intel(R) 82577LM Gigabit Network Connection
 14...00 24 d7 6c a6 fc ......Intel(R) Centrino(R) Ultimate-N 6300 AGN
 15...00 24 d7 6c a6 fd ......Microsoft Virtual WiFi Miniport Adapter
 10...5c ac 4c fd 7b 5e ......Bluetooth Device (Personal Area Network)
 16...00 50 56 c0 00 01 ......VMware Virtual Ethernet Adapter for VMnet1
 17...00 50 56 c0 00 08 ......VMware Virtual Ethernet Adapter for VMnet8
  1...........................Software Loopback Interface 1
===========================================================================

IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0      192.168.1.1    192.168.1.200     266
          0.0.0.0          0.0.0.0      192.168.1.1      192.168.1.2      50
        127.0.0.0        255.0.0.0          On-link        127.0.0.1     306
        127.0.0.1  255.255.255.255          On-link        127.0.0.1     306
  127.255.255.255  255.255.255.255          On-link        127.0.0.1     306
      192.168.1.0    255.255.255.0          On-link    192.168.1.200     266
      192.168.1.0    255.255.255.0          On-link      192.168.1.2     281
      192.168.1.2  255.255.255.255          On-link      192.168.1.2     281
              ...              ...              ...              ...     ...

Monday, June 20, 2011

How to attach CKEditor keyup event

My current version: CKEditor 3.6

  editor.document.on('keyup', function(evt){...});

I am trying to attach a keyup event to CKEditor. Unfortunately, the handler stops firing after setData() is called.

It turns out that editor.document is recreated every time setData() is called, so all events attached to editor.document are lost. Re-attaching the event to editor.document is necessary. Luckily, setData() supports a callback.

  editor.setData('set some data', function(evt) {
    editor.document.on('keyup', myKeyupEvent);
  });

I don't understand why CKEditor doesn't support the keyup event in editor.on. You can't do this:

  // This does NOT work:
  editor.on('keyup', function(evt){...});

But for the keypress or keydown events, you can do this:

  editor.on('key', function(evt){...});

There isn't much detail about CKEditor events, and their API documentation isn't helpful either. Reading through the event API, I still have no idea when I should use editor.on to attach events and when I should use editor.document.on.

What I understand is that editor.on is good for custom events: you write your own event, hook it up, and fire it yourself as needed. Events registered via editor.on are not removed after setData() has been called, so there is no need to re-attach them even though editor.document is recreated.

Events registered via editor.document.on can be any of the standard DOM events, as long as you provide an event handler for them. The drawback is that the handler must be registered again after editor.document has been recreated.

Currently, I have no custom events to define; thus, I find editor.document.on more useful than editor.on. I can hook as many events as I want without restriction.
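The re-attachment boilerplate can be centralized instead of being repeated in every setData() callback. Below is a minimal sketch, assuming CKEditor 3's 'contentDom' event (which, as I understand it, fires each time the inner document is rebuilt, including after setData()); the stub objects are hand-made stand-ins for a real CKEditor instance so the pattern can run self-contained:

```javascript
// Minimal stand-in for a CKEditor 3 instance: just enough surface to
// demonstrate the pattern (a real editor exposes the same on() methods).
function StubDocument() { this.handlers = {}; }
StubDocument.prototype.on = function (name, fn) { this.handlers[name] = fn; };
StubDocument.prototype.fire = function (name) {
  if (this.handlers[name]) this.handlers[name]();
};

function StubEditor() {
  this.document = new StubDocument();
  this.listeners = {};
}
StubEditor.prototype.on = function (name, fn) { this.listeners[name] = fn; };
StubEditor.prototype.setData = function (data) {
  // Mimics CKEditor: the inner document is thrown away and rebuilt,
  // losing every handler attached via editor.document.on(...).
  this.document = new StubDocument();
  if (this.listeners['contentDom']) this.listeners['contentDom']();
};

// The actual pattern: hook 'contentDom' so handlers survive setData().
function bindDocumentEvents(editor, events) {
  var attach = function () {
    for (var name in events) editor.document.on(name, events[name]);
  };
  editor.on('contentDom', attach); // re-run each time the document is rebuilt
  attach();                        // cover the document that already exists
}

var editor = new StubEditor();
var count = 0;
bindDocumentEvents(editor, { keyup: function () { count++; } });

editor.document.fire('keyup'); // before setData: handler is attached
editor.setData('new content'); // document recreated
editor.document.fire('keyup'); // handler was re-attached automatically
console.log(count);            // 2
```

With a real editor, a single bindDocumentEvents(editor, { keyup: myKeyupEvent }) call would replace the per-setData() re-attachment callbacks.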

Wednesday, June 8, 2011

Posting an ad to Craigslist has now become more difficult and frustrating

I hadn't posted or purchased anything on Craigslist for a very long time. In my mind, Craigslist was a great service for local people to get good deals. Unfortunately, a lot of innocent people got scammed, and sad stories happened one after another. A few years ago, Craigslist started requiring people posting classified ads in the USA or Canada to go through phone verification; each successfully verified phone number is good for 90 days. For consumers this is a good thing; it certainly eliminates a lot of spammers, especially the ones from overseas. However, this control, along with others, makes things extremely inconvenient for many legitimate posters (such as marketers). Many of them have turned to alternatives such as backpage.com, ebayclassifieds.com, etc. Sadly, the alternatives' traffic is still nowhere near Craigslist's.

I am not a marketer, but like everyone else, I sometimes have things to sell or give away. The first service that pops into my mind is Craigslist. I had been trying to post an ad since yesterday morning without success: none of my phone numbers could be verified.

First, I tried to use my magicJack phone number for verification. It turns out that Craigslist won't accept any VoIP phone, so magicJack was out. Then I entered my cell phone number. Unfortunately, the page kept saying, “You are submitting telephone verification requests too rapidly. To prevent abuse, we require users to wait 5 minutes between requests, use no more than 3 telephone numbers in a 12 hour period, and not use any single phone number more than 3 times in a 12 hour period. Please wait and try again later.” At first I believed the error message, but after waiting a few hours and trying again, I concluded that something must have gone wrong. Reporting the error to Craigslist was no use; I received the same explanation I could already find on their phone authentication help page. What could I do? I put it aside for the day and tried again this morning. The same error message reappeared, even though far more than the 12-hour waiting period had passed. I couldn't believe I was still getting the same error. It implies that their phone verification service is simply not reliable, which could be why it frustrates so many marketers, among other issues.

Craigslist's phone verification process is harder to get through than an ATM when you're trying to remember your passcode. At the ATM, at least I get 3 physical attempts; with Craigslist phone verification, it is very easy to exceed the 3 attempts without even knowing it.

You cannot refresh the page or open the same URL in another browser window or tab. If you do, it counts as another attempt, because the page itself remembers your previous phone number. This has nothing to do with the browser cache: your URL is tied to your email address, and when you enter a phone number, it is automatically tied to that number as well, on the Craigslist back-end server, not in the browser. So if you run into a problem, don't refresh or cut-and-paste the URL into another browser to try again; you will quickly burn two of your attempts. Unfortunately, my issue was not related to this.

To resolve the issue, I decided to sign up for a Craigslist account. The signup form automatically detected a problem, saying, “There is already an account associated with this email address,” and offered me a chance to reset my password. I had never signed up for a Craigslist account, so I guess a temporary account was created for me as a non-registered user when I tried to post an ad; obviously such a temporary account is required for their phone verification service. I followed the link to request a password reset, received an instant email, reset my password, and finally had a registered Craigslist account.

Having an account with Craigslist doesn't exempt me from phone verification when posting a classified ad. But as far as I can tell, the password reset or registration process cleared all the errors, including every field of my phone number stored on their back-end server. From then on, I could re-enter my cell phone number for verification on the page. Unfortunately, I requested a text message/SMS from Craigslist, but it was never delivered; when I checked the status of the code, it said “the call has been placed.” Call? I had requested a text message, not a call! So I waited another 5 or more minutes and then requested a voice message instead. My cell phone rang as soon as I hit the "send the code!" button, and my ad was finally posted.

This whole verification process has soured my impression of Craigslist. I hope they will improve it. Once the alternatives start building up their traffic, people may leave Craigslist if this verification process is still such a headache. For now, Craigslist still holds a solid position in the marketplace.

Monday, June 6, 2011

Passed 70-515

As of today, I am certified for ASP.NET 4.0.

Learning from my previous experience, yesterday I drove to the test center to check it out first. I don't understand why the same test center keeps moving from place to place! Every time I take an exam with them, they have moved. This time I made sure I knew how to get there, so everything went smoothly. As usual, I arrived an hour ahead and waited outside until my time came.

There are 51 questions in total, with 2.5 hours allowed. I finished in about 1.75 hours. I am happy with my score: 970 out of 1000, so I missed perhaps 1 or 2 questions. The passing mark for this test is still 700.

For some reason, I feel 4.0 is a lot easier than 3.5, but I was surprised to be tested on some jQuery questions; there were at least 4 in my test. Frankly, I don't find them related to ASP.NET, and to me it makes no sense to have them in the test: jQuery is not mentioned in the Microsoft testing objectives, so why am I being tested on jQuery syntax? Luckily, I use jQuery and was able to answer them; otherwise, I could have been doomed. Besides MVC 2 and the dynamic data material, I didn't see anything new in the test. If you know 3.5 (including SP1) well and put in some effort to learn MVC 2 and dynamic data, you will certainly be ready for the test.

Good luck to everyone who is preparing or going to take this test!

Thursday, May 19, 2011

WCF and Interface

Some time ago, I wrote a few WCF services without using interfaces. As time went by, some functionality needed to expand. To avoid exposing all of the service functionality on a single endpoint, implementing multiple interfaces became necessary so that multiple endpoints could be configured, one per interface. Unfortunately, this change broke every client call (JavaScript) in the old code.

In the code, the JavaScript function calls were all based on the proxy stub automatically generated by ASP.NET, since the page uses a ScriptManager to manage the MS AJAX library. Thus, changing the contract name specified in Web.config automatically propagates into the generated stub.

Previously, no interface was involved: the contract attribute of the endpoint defined in Web.config pointed directly to the name of the service itself. For example,

<system.serviceModel>
  <behaviors>
    <endpointBehaviors>
      <behavior name="Shipment.Order.WebAspNetAjaxBehavior">
        <enableWebScript />
      </behavior>
      ...
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="Shipment.Order">
      <endpoint address=""
            behaviorConfiguration="Shipment.Order.WebAspNetAjaxBehavior"
            binding="webHttpBinding"
            contract="Shipment.Order" />
    </service>
    ...
  </services>
</system.serviceModel>

Now, with the interface, the endpoint contract is declared as the interface instead of the service class itself:

  <service name="Shipment.Order">  
     <endpoint address="" 
           behaviorConfiguration="Shipment.Order.WebAspNetAjaxBehavior"
           binding="webHttpBinding" 
           contract="Shipment.IOrder" />
  </service>  
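On the service side, the change behind this configuration is roughly the following sketch; the operation name Confirm is taken from the calls below, while the [OperationContract] signature and its parameter are assumptions for illustration:

```csharp
using System.ServiceModel;

namespace Shipment {
  // The endpoint contract now points at this interface
  // (contract="Shipment.IOrder" in Web.config).
  [ServiceContract]
  public interface IOrder {
    [OperationContract]
    void Confirm(int id);   // signature assumed for illustration
  }

  // The existing service implements the interface.
  public class Order : IOrder {
    public void Confirm(int id) {
      // ... existing implementation ...
    }
  }
}
```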

In the page, the original JavaScript would call the service operation like this:

    Shipment.Order.Confirm(myId);

In order to work with the interface approach, the JavaScript has to use the interface to make a call:

    Shipment.IOrder.Confirm(myId);

Since the service went into production some time ago, changing it to use an interface is not a good idea; it would affect many pages that use the service. Instead of making the existing service implement an interface, I derived a subclass from it so that the subclass inherits whatever its parent has. Then I simply configured an endpoint for this subclass instead of configuring multiple endpoints on the existing service. As a result, it works like a charm!
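A rough sketch of the idea (OrderV2 is a hypothetical name, and the endpoint shown in the comment is my assumption about how it would be configured):

```csharp
using System.ServiceModel;

namespace Shipment {
  // Existing, already-deployed service: the class itself is the contract,
  // so the generated JavaScript stub stays Shipment.Order.*.
  [ServiceContract]
  public class Order {
    [OperationContract]
    public void Confirm(int id) { /* existing implementation */ }
  }

  // The subclass inherits everything from Order. Its endpoint can reuse
  // the parent class as the contract, leaving the old endpoint untouched:
  //   <service name="Shipment.OrderV2">
  //     <endpoint ... contract="Shipment.Order" />
  //   </service>
  public class OrderV2 : Order { }   // hypothetical name
}
```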

If you are having a problem with the interface approach, or keep getting JavaScript errors such as "xxx is null or not an object" or "Cannot call method 'xxx' of undefined", take a close look at the method name you use in the JavaScript. It must be changed to use the interface, because the JavaScript stub generated by ASP.NET is based on the contract defined in Web.config. You can change the namespace by declaring [ServiceContract(Namespace = "xxx")] on your interface/class, but the contract name itself won't change. You can examine the JavaScript stub by appending "/js" to the end of the service URL to compare the differences.

Wednesday, May 11, 2011

Aliases created by cliconfg.exe (32-bit / 64-bit) don't work on 64-bit Windows 7

cliconfg.exe can be found in two locations on a 64-bit system when the SQL client tools are installed:
- [64-bit] C:\Windows\System32\cliconfg.exe
- [32-bit] C:\Windows\SysWOW64\cliconfg.exe

(On 64-bit Windows, System32 holds the 64-bit binaries and SysWOW64 holds the 32-bit ones.)

I wonder if anyone else has experienced this.

Recently, I moved my development machine to 64-bit Windows 7, but I still have one project database running on 32-bit XP; the DB server is SQL Server 2005.

At my development box, I have SQL Server 2008 installed. I tried both the 32-bit and 64-bit cliconfg.exe to create an alias, but I could not connect to that DB server on XP. In SQL Server Configuration Manager, the aliases created by both the 32-bit and 64-bit cliconfg.exe show up only under the "SQL Native Client 10.0 Configuration (32bit)" category.

[Updated on May 15, 2011] As a matter of fact, the entry is correct, because SQL Server 2005 is running on 32-bit XP; that is why the entry shows up only under the 32-bit category. This entry can be used in a .NET connection string. Unfortunately, it cannot be used by SQLCMD (by default found at "C:\Program Files\Microsoft SQL Server\100\Tools\Binn") for a remote connection. That is what I tried last time, and it failed.

If I instead use the non-32-bit client configuration in SQL Server Configuration Manager to create an alias, the remote connection immediately works.

[Updated on May 15, 2011] Last time, I didn't check with a .NET application; I only used SQLCMD -S <alias> for the connection. It turns out this entry only works for SQLCMD and fails for .NET remote connections.
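For reference, using such an alias from .NET just means putting the alias name in the Data Source of an ordinary connection string; a sketch with hypothetical names (MyXpAlias, MyDatabase):

```csharp
using System.Data.SqlClient;

class AliasConnectionDemo {
  static void Main() {
    // "MyXpAlias" is a hypothetical alias created under
    // "SQL Native Client 10.0 Configuration (32bit)".
    var builder = new SqlConnectionStringBuilder {
      DataSource = "MyXpAlias",        // the alias, not the machine name
      InitialCatalog = "MyDatabase",   // hypothetical database
      IntegratedSecurity = true
    };

    using (var conn = new SqlConnection(builder.ConnectionString)) {
      conn.Open();  // resolves the alias via the 32-bit ConnectTo registry key
      System.Console.WriteLine(conn.ServerVersion);
    }
  }
}
```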

In the registry, an alias created by either version of cliconfg.exe ends up at

   HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo

An alias created by the non-32-bit client configuration in SQL Server Configuration Manager instead ends up at the following registry key:

   HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo

Currently I don't have time to investigate further. I just wonder why an alias created by either the 32-bit or the 64-bit cliconfg.exe utility won't work. I thought each version of cliconfg.exe maintained a separate list of aliases in the registry; why do both point to the same registry key?

[Updated: May 15, 2011] The best explanation for why both the 32-bit and 64-bit cliconfg.exe utilities create the same alias and registry entry is that the SQL Server running on XP is 32-bit. The alias created by the non-32-bit client configuration in SQL Server Configuration Manager works for SQLCMD but not for .NET, while the entry shown under the "SQL Native Client 10.0 Configuration (32bit)" category works well in a .NET ConnectionString. Again, the root cause in my case is that the SQL Server is running on 32-bit.

Thursday, April 14, 2011

URL Routing Causes Login.aspx to Load and SyntaxErrors

It is very strange. After the URL routing module was added to a non-MVC Web application, the application kept trying to load Login.aspx into pages that don't require authentication, causing syntax errors such as:

Uncaught SyntaxError: Unexpected token <
Register.aspx:63 Uncaught Error: ASP.NET Ajax client-side framework failed to load.
Login.aspx:3 Uncaught SyntaxError: Unexpected token <
jsdebug:1 Uncaught ReferenceError: Type is not defined
Uncaught SyntaxError: Unexpected token <
Login.aspx:187 Uncaught ReferenceError: WebForm_AutoFocus is not defined

After a few hours of searching through the code, I realized that the problem was caused by WebResource.axd. All the scripts for WebForm validation, focus, postback, etc. were replaced by Login.aspx; requests to WebResource.axd were instead loading Login.aspx.

Another interesting point: I duplicated some of the user controls and pages into a temporary project for investigation, but I was unable to reproduce the same behavior. With the same routing table, the same Web.config, and a scaled-down Global.asax, Login.aspx was not loaded when it was not asked for, and all JavaScript files were loaded by WebResource.axd as expected. I still wonder what settings or events in the original application cause this problem.

Since this problem is caused by routing, one way to fix it is to bypass the route. Adding RouteTable.Routes.RouteExistingFiles = true; won't fix the problem; instead, a specific rule for ignoring the route is needed.


// For .NET 4.0:
// RouteTable.Routes.Ignore("{resource}.axd/{*pathInfo}");

// For .NET 3.5 SP1 (Routes.Ignore is not available):
RouteTable.Routes.Add(new Route("{resource}.axd/{*pathInfo}", new StopRoutingHandler()));
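For context, such an ignore rule is typically registered in Global.asax before any other routes; a minimal sketch (the commented-out page route is a hypothetical example):

```csharp
using System;
using System.Web;
using System.Web.Routing;

public class Global : HttpApplication {
  void Application_Start(object sender, EventArgs e) {
    // Ignore rules must be registered before the real routes so that
    // requests for WebResource.axd / ScriptResource.axd bypass routing.
    RouteTable.Routes.Add(new Route("{resource}.axd/{*pathInfo}",
                                    new StopRoutingHandler()));

    // ... application routes go here, e.g. (hypothetical, .NET 4.0 only):
    // RouteTable.Routes.MapPageRoute("Default", "home", "~/Default.aspx");
  }
}
```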

Tuesday, April 12, 2011

DateTime.Parse problem

Some DateTime values are taken from user input and instantiated with the simple DateTime constructor new DateTime(year, month, day);. When they are serialized into XML, the string looks like this:

   2001-07-08T00:00:00   

I tried to parse it back to a DateTime from the XML value string. Unfortunately, I tried every method I knew and kept getting the String was not recognized as a valid DateTime error. For example,

  
    // Every method here throws "String was not recognized as a valid DateTime".

    DateTime d = DateTime.Parse(query.Element("joinDate").ToString());

    string dateStr = query.Element("joinDate").ToString().Replace("T", "");
    DateTime d = XmlConvert.ToDateTime(dateStr, "yyyy-mm-dd hh:mm:ss");

    string dateStr = query.Element("joinDate").ToString().Replace("T", "");
    DateTime d = DateTime.ParseExact(dateStr, "yyyy-mm-dd hh:mm:ss", null);

    string dateStr = query.Element("joinDate").ToString().Replace("T00:00:00", "");
    DateTime d = DateTime.ParseExact(dateStr, "yyyy-mm-dd", null);

None of the above worked for me. I also tried to follow the examples and best practices discussed on MSDN, but the error wouldn't go away. Time zone is not my concern; as a matter of fact, my serialized XML value is not really represented in UTC (2001-07-08T00:00:00). I guess that may be why the MSDN suggestions didn't work!

I finally gave up and used a regular expression to parse the date fields, then built the DateTime manually. The following is my solution. I am sure there is a better way to handle it; I just don't know how, and probably I don't understand how DateTime.Parse works!

   
// Manually parse the string to DateTime

string pattern = @"(?<year>\d{4})-(?<mth>\d{2})-(?<day>\d{2})T00:00:00";
System.Text.RegularExpressions.Match m = 
System.Text.RegularExpressions.Regex.Match(query.Element("joinDate").ToString(), pattern);
int year = int.Parse(m.Groups["year"].ToString());
int month = int.Parse(m.Groups["mth"].ToString());
int day = int.Parse(m.Groups["day"].ToString());

// this statement is also exactly how 
// I create the DateTime from the user's input.
DateTime d = new DateTime(year, month, day); 
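In hindsight, a likely explanation is worth noting: XElement.ToString() returns the element's full markup, tags included, rather than just its text, and that markup is what DateTime.Parse rejects (the regex still works because it matches the date inside the markup). A small self-contained sketch of this hypothesis, reusing the joinDate element name from above:

```csharp
using System;
using System.Xml.Linq;

class JoinDateDemo {
  static void Main() {
    var joinDate = new XElement("joinDate", new DateTime(2001, 7, 8));

    // ToString() yields the markup, which DateTime.Parse cannot handle:
    Console.WriteLine(joinDate.ToString());
    // <joinDate>2001-07-08T00:00:00</joinDate>

    // .Value yields just the text content, which parses fine:
    DateTime d = DateTime.Parse(joinDate.Value);
    Console.WriteLine(d == new DateTime(2001, 7, 8));  // True
  }
}
```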

Read XML from MemoryStream

This example follows the previous post. What if we want to generate the XML into a stream instead of a physical file? Initially, I thought the solution was very simple and could be done in a minute; instead, it took me hours to figure out.

    public MemoryStream ToXml() {
      MemoryStream ms = new MemoryStream();
      
      XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
      ns.Add("", "");    

      XmlSerializer ser = new XmlSerializer(typeof(Subscribers));      
      ser.Serialize(ms, new Subscribers(), ns);       
 
      return ms;
    } 

If you read the XML from the MemoryStream returned by the above code with XDocument or XmlDocument, you will encounter a "Root element is missing" error. For example,

  using (StreamReader reader = new StreamReader(new Subscribers().ToXml())) {
    subscribers = XDocument.Load(reader);
  }

When the XmlSerializer has finished writing into the MemoryStream, the stream position is at the end of the XML. Thus, the function should rewind to the beginning of the stream before returning the MemoryStream to the caller. Adding ms.Seek(0, SeekOrigin.Begin); (or, equivalently, ms.Position = 0;) before the return ms; statement does the trick.

    public MemoryStream ToXml() {
      MemoryStream ms = new MemoryStream();
      
      XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
      ns.Add("", "");    

      XmlSerializer ser = new XmlSerializer(typeof(Subscribers));      
      ser.Serialize(ms, new Subscribers(), ns);       
      
      ms.Seek(0, SeekOrigin.Begin);  // rewind to the beginning of the stream

      return ms;
    } 
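The effect of the rewind can be seen in a self-contained sketch; a minimal stand-in Subscriber class replaces the one from the post:

```csharp
using System;
using System.IO;
using System.Xml.Linq;
using System.Xml.Serialization;

public class Subscriber {              // minimal stand-in for the post's class
  public string FirstName { get; set; }
}

class RewindDemo {
  static void Main() {
    var ms = new MemoryStream();
    new XmlSerializer(typeof(Subscriber))
        .Serialize(ms, new Subscriber { FirstName = "Pat" });

    // Without this line, XDocument.Load below throws "Root element is missing".
    ms.Seek(0, SeekOrigin.Begin);

    using (var reader = new StreamReader(ms)) {
      XDocument doc = XDocument.Load(reader);
      Console.WriteLine(doc.Root.Name);   // Subscriber
    }
  }
}
```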

Serialization: How to override the element Name of an item inside a Collection

We cannot control the element name used for an object when it is an item inside a collection, array, list, etc. We can use XmlAttributeOverrides to override each property name of that object, but not the element name of the object itself.

Consider the following scenario for the XML structure: <customers><customer>...</customer>...<customer>...</customer>...</customers>

  <?xml version="1.0" encoding="utf-8"?>  
  <customers>
    <customer id="2600CD00">
      <firstName>Pat</firstName>
      <lastName>Thurston</lastName>
      ...
    </customer>
    <customer id="1E9CC4B0">
      <firstName>Kari</firstName>
      <lastName>Furse</lastName>
      ...
    </customer>
    <customer id="60R120B3">
      <firstName>Carl</firstName>
      <lastName>Stuart</lastName> 
      ...
    </customer>
    ...  
  </customers>

We want to make use of the existing classes to generate the above XML structure.

Before:

  public class Subscriber {
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string ID { get; set; }
    ...
  }

  public class Subscribers 
    : System.ComponentModel.IListSource {

    System.ComponentModel.BindingList<Subscriber> bList;
    ...
  }

After:

  [XmlRoot(ElementName = "customer")]
  public class Subscriber {
    [XmlElement(ElementName = "firstName")]
    public string FirstName { get; set; }

    [XmlElement(ElementName = "lastName")]
    public string LastName { get; set; }

    [XmlAttribute(AttributeName = "id")]
    public string ID { get; set; }

    ...
  }

  public class Subscribers 
    : System.ComponentModel.IListSource {

    System.ComponentModel.BindingList<Subscriber> bList;
    ...

    public void ToXml(string outputFileName) {
      StreamWriter w = new StreamWriter(outputFileName);

      XmlRootAttribute root = new XmlRootAttribute("customers");

      XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
      ns.Add("", "");

      XmlSerializer ser = new XmlSerializer(bList.GetType(), root);
      ser.Serialize(w, bList, ns);
    }
  }

Now, when we call new Subscribers().ToXml("customers.xml"); to generate the XML, the result is not what we want:

  <?xml version="1.0" encoding="utf-8"?>  
  <customers>
    <Subscriber id="2600CD00">
      <firstName>Pat</firstName>
      <lastName>Thurston</lastName>
      ...
    </Subscriber>
    <Subscriber id="1E9CC4B0">
      <firstName>Kari</firstName>
      <lastName>Furse</lastName>
      ...
    </Subscriber>
    <Subscriber id="60R120B3">
      <firstName>Carl</firstName>
      <lastName>Stuart</lastName> 
      ...
    </Subscriber>
    ...  
  </customers>

If we serialize our Subscriber class directly, we can of course alter the element name at the root level:

  <?xml version="1.0" encoding="utf-8"?>    
    <customer id="AE19600F">
      <firstName>Janko</firstName>
      <lastName>Cajhen</lastName>
      ...
    </customer>

Inspired by another personal project of mine, I fortunately figured out my own solution.

The Solution:

  [XmlRoot("customers")]
  public class Subscribers 
    : System.ComponentModel.IListSource {

    System.ComponentModel.BindingList<Subscriber> bList;
    ...
    
    
    [XmlElement("customer")]
    public List<Subscriber> All {
      get {
        return bList.ToList();
      }
    }
    
    ...
    
    public void ToXml(string outputFileName) {
      StreamWriter w = new StreamWriter(outputFileName); 
        
      XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
      ns.Add("", "");    

      XmlSerializer ser = new XmlSerializer(typeof(Subscribers));      
      ser.Serialize(w, new Subscribers(), ns); 
    }    
  }
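The essence of the trick can be reduced to a self-contained sketch, in which a List<string> stands in for the BindingList of Subscriber: the [XmlRoot] on the container names the root element, while the [XmlElement] on the wrapper property names each item.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

[XmlRoot("customers")]
public class Customers {
  [XmlElement("customer")]   // each list item serializes as <customer>
  public List<string> Names { get; set; }
}

class WrapperDemo {
  static void Main() {
    var ns = new XmlSerializerNamespaces();
    ns.Add("", "");   // suppress the default xsi/xsd namespaces

    var w = new StringWriter();
    new XmlSerializer(typeof(Customers)).Serialize(
        w, new Customers { Names = new List<string> { "Pat", "Kari" } }, ns);

    // Writes a <customers> root containing
    // <customer>Pat</customer> and <customer>Kari</customer>.
    Console.WriteLine(w);
  }
}
```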

Using DataPager in ListView

If the DataSource is not known statically at design time, the DataPager may not work correctly. The following error can be expected when you click a link provided by the DataPager for the second time:

Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request. For example, when adding controls dynamically, the controls added during a post-back must match the type and position of the controls added during the initial request.

This problem occurs because, when the DataSource is only known at runtime, the DataPager has no idea how to perform or calculate paging for you: it doesn't know which page is supposed to be displayed (i.e., the StartRowIndex and MaximumRows for the page). Thus, you need to provide this missing piece of information to the DataPager before data binding.

From a Google search, you may find that quite a few people implement the PreRender event of the DataPager to perform data binding. Unfortunately, it doesn't work for this scenario: you can bind the data in the DataPager's PreRender event, but you cannot supply the paging properties to the DataPager as mentioned above. Both the StartRowIndex and MaximumRows properties need to be set before data binding for paging to work. This problem took me a few hours to resolve, and it turns out that the solution is very simple.

The Solution: add and implement the PagePropertiesChanging event of the ListView. The PagePropertiesChangingEventArgs event argument provides the paging properties you need (StartRowIndex and MaximumRows) so that you can supply them to the DataPager.

    protected void ListView1_PagePropertiesChanging(object sender, PagePropertiesChangingEventArgs e) {     
      this.DataPage1.SetPageProperties(e.StartRowIndex, e.MaximumRows, false);
      BindData();  // set DataSource to ListView and call DataBind() of ListView
    }

If the DataPager is placed inside the ListView, do this:

    protected void ListView1_PagePropertiesChanging(object sender, PagePropertiesChangingEventArgs e) {
      ListView lv = sender as ListView;
      DataPager pager = lv.FindControl("DataPage1") as DataPager;
      pager.SetPageProperties(e.StartRowIndex, e.MaximumRows, false);
      BindData();  // set DataSource to ListView and call DataBind() of ListView
    }
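For completeness, here is a rough sketch of how the event might be wired up in the markup when the pager sits inside the ListView's LayoutTemplate; the control IDs follow the code above, and the templates are elided:

```aspx
<asp:ListView ID="ListView1" runat="server"
    OnPagePropertiesChanging="ListView1_PagePropertiesChanging">
  <LayoutTemplate>
    <!-- ... an itemPlaceholder control goes here ... -->
    <asp:DataPager ID="DataPage1" runat="server" PageSize="10">
      <Fields>
        <asp:NextPreviousPagerField ShowFirstPageButton="true"
                                    ShowLastPageButton="true" />
      </Fields>
    </asp:DataPager>
  </LayoutTemplate>
  <!-- ... ItemTemplate ... -->
</asp:ListView>
```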

Sunday, April 10, 2011

StructureMap Configuration and Object Creation

Reference: StructureMap - Scoping and Lifecycle Management
My Test version: StructureMap 2.6.1

Configuration for Object Creation

Recently I've used StructureMap in one of my projects, and I'm beginning to like it, especially because I can control object life cycles via StructureMap without changing my code. The following are some ways to configure StructureMap to create object instances.

  1. Per request basis

    StructureMap constructs object instances transiently by default. Thus, each time you will get a new instance.

        public void NewPerRequest() {
    
          Container mymap = new Container(x => {
            x.For<ISimple>().Use<Simple>();
          });
    
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
    
          string formatter = "PerRequest [default]: instances are the same? {0}";
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
        }
    
  2. Singleton

    StructureMap can apply the Singleton design pattern for you if you want one and only one instance of your object to be active during the life of the application.

        public void Singleton() {
          Container mymap = new Container(x => {
            x.For<ISimple>().Singleton().Use<Simple>();
          });
    
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
    
          string formatter =  "Singleton: instances are the same? {0}";
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
        }    
    
  3. HttpContextScoped

    You can configure StructureMap to construct objects that live in HttpContext scope. This method only applies to Web applications.

        public void HttpContextScoped() {
          Container mymap = new Container(x => {
            x.For<ISimple>().HttpContextScoped().Use<Simple>();
          });
    
          using (new MockHttpContext()) {
            ISimple first = mymap.GetInstance<ISimple>();
            ISimple second = mymap.GetInstance<ISimple>();
    
            string formatter = "HttpContextScoped: instances are the same: {0}";
            System.Console.WriteLine(formatter, ReferenceEquals(first, second));
          }
        }
    
  4. HttpContextLifecycle

    Configure StructureMap to construct objects that live in the HttpContext life cycle. This method also only applies to Web applications.

        public void HttpContextLifecycle() {
          Container mymap = new Container(x => {
            x.For<ISimple>().LifecycleIs(new HttpContextLifecycle()).Use<Simple>();
          });
    
          using (new MockHttpContext()) {
            ISimple first = mymap.GetInstance<ISimple>();
            ISimple second = mymap.GetInstance<ISimple>();
    
            string formatter = "HttpContextLifecycle: instances are the same: {0}";
            System.Console.WriteLine(formatter, ReferenceEquals(first, second));
          }
        }
    
  5. HttpSessionLifecycle

    Configure StructureMap to instantiate an object in the HttpSession context. Like the previous two methods, it only works in a regular Web environment.

        public void HttpSessionLifecycle() {
          Container mymap = new Container(x => {
            x.For<ISimple>().LifecycleIs(new HttpSessionLifecycle()).Use<Simple>();
          });
    
    
          #region Create HttpSession environment for test
          // Mock a new HttpContext by using SimpleWorkerRequest
          System.Web.Hosting.SimpleWorkerRequest request =
            new System.Web.Hosting.SimpleWorkerRequest("/", string.Empty, string.Empty, string.Empty, new System.IO.StringWriter());
    
          System.Web.HttpContext.Current = new System.Web.HttpContext(request);
    
          MockHttpSession mySession = new MockHttpSession(
              Guid.NewGuid().ToString(),
              new MockSessionObjectList(),
              new System.Web.HttpStaticObjectsCollection(),
              600,
              true,
              System.Web.HttpCookieMode.AutoDetect,
              System.Web.SessionState.SessionStateMode.Custom,
              false);
    
          System.Web.SessionState.SessionStateUtility.AddHttpSessionStateToContext(
             System.Web.HttpContext.Current, mySession
          );
          #endregion
    
          // now we are ready to test
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
    
          string formatter = "HttpSessionLifecycle: instances are the same: {0}";
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
    
        }
    
  6. HybridSessionLifecycle

    Lifecycles tied to HttpContext or HttpSession only work in an ASP.NET Web environment; you won't be able to get such instances in a client or service environment. HybridSessionLifecycle may resolve this: it tries to use HttpContext storage if it exists, and otherwise falls back to ThreadLocal storage. However, I found that this method somehow doesn't work with HttpHandler.

        // According to my experience, 
        // HybridSessionLifecycle works in most situations (including regular Web environment, 
        // Web service, console) but it doesn't work well with HttpHandler while 
        // HybridHttpOrThreadLocalStorage does. 
        public void HybridSessionLifecycle() {
          Container mymap = new Container(x => {
            x.For<ISimple>().LifecycleIs(new HybridSessionLifecycle()).Use<Simple>();
          });
    
    
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
      
          string formatter = "HybridSessionLifecycle: instances are the same: {0}";
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
        }
    
  7. HybridHttpOrThreadLocalScoped

    Like HybridSessionLifecycle, it works in most situations, and it also works well in an HttpHandler.

       // Unlike HybridSessionLifecycle, it works well with HttpHandlers and WCF Services 
        // besides a regular ASP.NET environment.
        public void HybridHttpOrThreadLocalScoped() {
          Container mymap = new Container(x => {        
            x.For<ISimple>().HybridHttpOrThreadLocalScoped().Use<Simple>();
          });
    
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
    
          string formatter = "HybridHttpOrThreadLocalScoped: instances are the same: {0}";
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
        }
    
  8. UniquePerRequestLifecycle

    You can instruct StructureMap to create a unique instance per request, ensuring no corrupted data and that all state is fresh.

        public void UniquePerRequestLifecycle() {
          Container mymap = new Container(x => {
            x.For<ISimple>().LifecycleIs(new UniquePerRequestLifecycle()).Use<Simple>();
          });
    
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
    
          string formatter = "UniquePerRequestLifecycle: instances are the same: {0}";
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
        }
    

The Code

  • NUnit Test

    using System;
    using StructureMapTest.Mocks;
    using StructureMap;
    using StructureMap.Pipeline;
    using StructureMap.Configuration.DSL;
    using NUnit.Framework;
    
    namespace StructureMapTest.UnitTest {
      [TestFixture]
      public class ReferenceEqualTest {
    
        private delegate NUnit.Framework.Constraints.SameAsConstraint CompareConstraintDelegate(object expected);
    
        private void Compare<T>(string formatter, CompareConstraintDelegate sameOrNot) where T : ILifecycle {
          Container mymap = new Container(x => {
            T t = Activator.CreateInstance<T>();
            x.For<ISimple>().LifecycleIs(t).Use<Simple>();
          });
    
          Compare(mymap, formatter, sameOrNot);
        }
    
        private void Compare(Container mymap, string formatter, CompareConstraintDelegate sameOrNot) {
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
    
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
    
          // Simulate to 2 situations:
          // 1. Assert.That(first, Is.SameAs(second));
          // 2. Assert.That(first, Is.Not.SameAs(second));
          Assert.That(first, sameOrNot(second));
        }
    
        [Test]
        public void Different_on_per_request_basis() {
    
          Container mymap = new Container(x => {
            x.For<ISimple>().Use<Simple>();
          });
    
          Compare(mymap, "PerRequest [default]: instances are the same? {0}", Is.Not.SameAs);
        }
    
        [Test]
        public void Same_on_Singleton_instance() {
          Container mymap = new Container(x => {
            x.For<ISimple>().Singleton().Use<Simple>();
          });
    
          Compare(mymap, "Singleton: instances are the same? {0}", Is.SameAs);
        }
    
        [Test]
        public void Same_on_HttpContextScoped() {
          Container mymap = new Container(x => {
            x.For<ISimple>().HttpContextScoped().Use<Simple>();
          });
    
          using (new MockHttpContext()) {
            Compare(mymap, "HttpContextScoped: instances are the same: {0}", Is.SameAs);
          }
        }
    
        [Test]
        public void Same_on_HttpContextLifecycle() {
          Container mymap = new Container(x => {
            x.For<ISimple>().LifecycleIs(new HttpContextLifecycle()).Use<Simple>();
          });
    
          using (new MockHttpContext()) {
            Compare(mymap, "HttpContextLifecycle: instances are the same: {0}", Is.SameAs);
          }
        }
    
        [Test]
        public void Same_on_HttpSessionLifecycle() {
          Container mymap = new Container(x => {
            x.For<ISimple>().LifecycleIs(new HttpSessionLifecycle()).Use<Simple>();
          });
    
          #region Create HttpSession environment for test
          // Mock a new HttpContext by using SimpleWorkerRequest
          System.Web.Hosting.SimpleWorkerRequest request =
            new System.Web.Hosting.SimpleWorkerRequest("/", string.Empty, string.Empty, string.Empty, new System.IO.StringWriter());
    
          System.Web.HttpContext.Current = new System.Web.HttpContext(request);
    
          MockHttpSession mySession = new MockHttpSession(
              Guid.NewGuid().ToString(),
              new MockSessionObjectList(),
              new System.Web.HttpStaticObjectsCollection(),
              600,
              true,
              System.Web.HttpCookieMode.AutoDetect,
              System.Web.SessionState.SessionStateMode.Custom,
              false);
    
          System.Web.SessionState.SessionStateUtility.AddHttpSessionStateToContext(
             System.Web.HttpContext.Current, mySession
          );
          #endregion
    
          // now we are ready to test
          Compare(mymap, "HttpSessionLifecycle: instances are the same: {0}", Is.SameAs);
        }
    
        // According to my experience, 
        // HybridSessionLifecycle works in most situations (including regular Web environment, 
        // Web service, console) but it doesn't work well with HttpHandler while 
        // HybridHttpOrThreadLocalStorage does. 
        [Test]
        public void Same_on_HybridSessionLifecycle() {
          Compare<HybridSessionLifecycle>("HybridSessionLifecycle: instances are the same: {0}", Is.SameAs);
        }
    
        // Unlike HybridSessionLifecycle, it works well with HttpHandlers and WCF Services 
        // besides a regular ASP.NET environment.
        [Test]
        public void Same_on_HybridHttpOrThreadLocalScoped() {
          Container mymap = new Container(x => {
            //x.For<ISimple>().HybridHttpOrThreadLocalScoped().Use<Simple>().Named("MyInstanceName");
            x.For<ISimple>().HybridHttpOrThreadLocalScoped().Use<Simple>();
          });
          Compare(mymap, "HybridHttpOrThreadLocalScoped: instances are the same: {0}", Is.SameAs);
        }
    
        [Test]
        public void Different_on_UniquePerRequestLifecycle() {
          Compare<UniquePerRequestLifecycle>("UniquePerRequestLifecycle: instances are the same: {0}", Is.Not.SameAs);
        }
    
      }
    }
    
  • class Simple

    using System;
    
    namespace StructureMapTest.Mocks {
      public interface ISimple {
        void DoSomething();
      }
    
      public class Simple : ISimple {
        public void DoSomething() {
        }
      }
    }
    
  • class MockHttpContext

    using System;
    
    namespace StructureMapTest.Mocks {
      public class MockHttpContext : IDisposable {
        private readonly System.IO.StringWriter sw;
    
        public MockHttpContext() {
          var httpRequest = new System.Web.HttpRequest("notExisted.aspx", "http://localhost", string.Empty);
          sw = new System.IO.StringWriter();
          var httpResponse = new System.Web.HttpResponse(sw);
          System.Web.HttpContext.Current = new System.Web.HttpContext(httpRequest, httpResponse);
        }
    
        public void Dispose() {
          sw.Dispose();
          System.Web.HttpContext.Current = null;
        }
      }
    }
    
  • class MockHttpSession and class MockSessionObjectList

    using System;
    
    namespace StructureMapTest.Mocks {
      // This class follows the example at:
      // http://msdn.microsoft.com/en-us/library/system.web.sessionstate.ihttpsessionstate.aspx
      public sealed class MockHttpSession : System.Web.SessionState.IHttpSessionState {
        ...
      } 
    } 
    
    using System;
    
    namespace StructureMapTest.Mocks {
      // This class follows the example at 
      // http://msdn.microsoft.com/en-us/library/system.web.sessionstate.isessionstateitemcollection(v=VS.90).aspx
      public class MockSessionObjectList : System.Web.SessionState.ISessionStateItemCollection {
        ...
      } 
    }  
    

Thursday, February 24, 2011

Why are validators' error messages not displayed in Google Chrome?

I have asked myself several times: why are the validators' error messages not displayed in Google Chrome? It is weird. Not a single validation error message is displayed on the page once the following CSS is added. It works well in IE, Safari, and Mozilla-type browsers but fails in Google Chrome.

  div.sink {padding:20px; width: 190px;border:1px silver solid;}
  div.sink input[type="text"]{width: 190px;}
  div.sink input[type="submit"]{width: 50px;}

I am sure that all the validators fire and the page consequently reports invalid, but no error message appears. There is no doubt that the problem is somehow related to the above CSS. To reduce the complexity of the page and more easily nail down the problem, I marked up a simple login page for investigation.

(Screenshot: the simple login page)

With a bit of modification, the page looks like this: very simple, just text boxes, RequiredFieldValidators, and a login button. There is no code-behind (no OnClick event); it is straight markup.

<head>
<style type="text/css">

div.login {padding:20px; width: 190px; border:1px silver solid;}
div.login input[type="text"], div.login input[type="password"]{ width: 190px; }
div.login input[type="submit"]{width: 50px;}

</style>
</head>
<body>
<form id="form1" runat="server">
<div class="login">

  User Name:<br />
  <asp:TextBox id="txtUsername" runat="server" MaxLength="30"></asp:TextBox><br />     
  <asp:RequiredFieldValidator ID="RequiredFieldValidatorUsername" runat="server" 
       ControlToValidate="txtUsername" Display="Dynamic" 
       ErrorMessage="A username is required."></asp:RequiredFieldValidator><br />

  Password:<br />
  <asp:TextBox id="txtPassword" runat="server"  TextMode="Password" MaxLength="180" ></asp:TextBox><br />     
  <asp:RequiredFieldValidator ID="RequiredFieldValidatorPassword" runat="server" 
       ControlToValidate="txtPassword" Display="Dynamic" 
       ErrorMessage="A password is required."></asp:RequiredFieldValidator><br />

 <asp:Button ID="btnLogin" runat="server" Text="Log In" /><br />

</div>
</form>
</body>

With the above markup, the error message is shown at least for the User Name, unlike my problem page. The funny thing is that in this simple example, only the last validator fails to display its error message as I keep adding more similar controls and validators. And sure enough, everything works normally if the CSS is removed.

No error message for the one and only validator

No error message for the last validator - screen 1

No error message for the last validator - screen 2

No error message for the last validator - screen 3

The CSS is not complex. As a matter of fact, it is very simple. All it does is set the width of the input boxes and button on the page. To resolve this, I could simply remove the CSS and set the width back on each control itself. However, I cannot do that with my original page. Most of its controls come from user controls which do not allow me to set a width. In addition, if I managed to add the width to them, the change would affect other pages across the entire project unless I did it programmatically instead of statically. That means going into every single user control and adding code so that it can accept a width change. I do not want to go down that route.

After playing around for some time, I finally found the problem: the width setting in the first line.

  div.login {padding:20px; width: 190px; border:1px silver solid;}
  div.login input[type="text"], div.login input[type="password"]{ width: 190px; }
  div.login input[type="submit"]{width: 50px;}

Adding 4 more pixels fixes the problem!

  div.login {padding:20px; width: 194px; border:1px silver solid;}
  div.login input[type="text"], div.login input[type="password"]{ width: 190px; }
  div.login input[type="submit"]{width: 50px;}

It appears that Google Chrome cares about the outer width of the DIV when such CSS is specified. The inputs are declared 190px wide, but their default borders add roughly 2px on each side, so their total width (about 194px) exceeds the 190px content box of the containing DIV; that is most likely why 4 extra pixels fix it. Other browsers seem to forgive a container that is not wide enough for its child components, but Chrome does not. From this instance, I learned to pay closer attention when declaring widths in CSS to prevent this from happening.
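A more robust way to avoid this class of width mismatch, assuming you are free to edit the stylesheet, is the border-box sizing model, which counts borders and padding inside the declared width. This is only a sketch of the idea, not the fix the original page used:

```css
/* Sketch: with border-box sizing, a 190px-wide input (borders and
   padding included) fits exactly inside a 190px content box, so no
   extra 4px of container width is needed. */
div.login input[type="text"],
div.login input[type="password"] {
  -webkit-box-sizing: border-box; /* older WebKit builds */
  -moz-box-sizing: border-box;    /* older Firefox builds */
  box-sizing: border-box;
  width: 190px;
}
```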

Saturday, February 12, 2011

Resource interpreted as image but transferred with MIME type text/html

One of my pages failed in Google Chrome. Because of the subject error, I could not debug it with the Chrome debugger: the script is not loaded into the debugger when the page is interpreted as an image. I wondered what caused it, and I could not ignore it.

After hours of looking into the code, I finally found the problem. In my case, I had forgotten to include a JavaScript library reference in one of my user controls. As soon as I added it back, the error was gone and the page loaded into the Chrome debugger like any other normal page. Everything works fine.

Although the error message from Chrome is confusing, it at least alerted me that something was wrong in my page. I am glad I found the problem and fixed it.

Why do page events get called twice?

There are a lot of discussions about why the Page events get called twice or even multiple times. Most of the solutions turned up by a Google search suggest setting AutoEventWireup to false (by default, it is true). Is that really helpful? A lot of people disagree, and so do I. The AutoEventWireup answer does not hold for all cases. In my experience, most of the time this issue is related to an image resource.

Consider the following HTML code in the ASP.NET page markup.

<img src="" id="img1" alt="an image" style="display:none" />   ✗

The HTML is valid, but it is the troublemaker in an ASP.NET Web Form: it causes the page events to be called twice because the src attribute is empty. An empty src resolves, as a relative URL, to the current page's own address, so the browser requests the page a second time as the "image" and the page lifecycle runs again. Removing the src attribute entirely fixes the problem.
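A sketch of the safer markup: leave the src attribute out entirely and assign a real URL from script only when one is available. The path below is a made-up placeholder, not something from the original page:

```html
<img id="img1" alt="an image" style="display:none" />
<script type="text/javascript">
  // 'images/logo.png' is a hypothetical path, used here for illustration only
  document.getElementById('img1').src = 'images/logo.png';
</script>
```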

Let's consider another image-related scenario. What if you want to set a background color on a table cell but mistakenly use the background attribute, which takes an image URL, instead of the bgcolor attribute or the CSS background-color property?

<TD background="#008080">   ✗

In this case, the page events are also called twice: the fragment #008080 again resolves to the current page's URL. But if an HTML color name is used, everything works fine, because the name resolves to a different (nonexistent) URL that never reaches the page. Thus the following HTML causes no trouble even though it is not correct.

<TD background="Teal">    ✓?

I am not quite sure how or when CalliHelper.EventArgFunctionCaller binds the above scenarios to the page events; none of these elements is declared to run at server. But I do know that if the page events are being called twice, the first thing I will check is the markup, especially the HTML related to image resources.

Using Form Target Attribute


Problem Statement

A page may have several links and buttons that produce different forms of output. Every single piece of data defined on the page, including the input supplied by the user, is needed for the postback. Is there a simple way to redirect the output to another window, without overwriting the current page, so that the current page content stays persistent?

Solution 1: Can we use the target attribute of the element?

For a hyperlink, you may think of using the target attribute to accomplish this. With that approach, you have to gather all the needed input from the current page into the query string of the URL yourself before posting the data back to the server. This introduces another problem: the query string size limit, which differs across platforms and browser types. Needless to say, there is client-side work to be done. In addition, your server page must be configured and coded to handle an HTTP GET request.
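To make that chore concrete, here is a rough sketch, entirely my own illustration, of the hand-rolled query string assembly a target-ed hyperlink would force on you:

```javascript
// Sketch: collecting the page's field values into a query string by hand,
// which is what the hyperlink approach requires before issuing the GET.
function buildQueryString(fields) {
  var parts = [];
  for (var name in fields) {
    if (fields.hasOwnProperty(name)) {
      parts.push(encodeURIComponent(name) + '=' + encodeURIComponent(fields[name]));
    }
  }
  return parts.join('&');
}

console.log(buildQueryString({ reportName: 'Q3 Sales', format: 'pdf' }));
// reportName=Q3%20Sales&format=pdf
```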

For an input button, we are out of luck: no target attribute is defined for it in the specification. The action relies entirely on the form's action.

Solution 2: How about AJAX?

Another approach is partial page rendering, which would give a better user experience by eliminating the full-page round-trip overhead. But does that really matter in our case, when we need every single piece of data defined or entered by the user on that page? It may save the overhead of reloading some resources; other than that, all of the page's data is still posted back to the server for processing. What we want is the output in another window, so that the user can keep using the same data to generate yet another form of output in a separate window, and so on. AJAX is actually a bit complicated here. Of course, you can use a library like jQuery to set up your AJAX calls, and in ASP.NET you can accomplish this with ScriptManager and UpdatePanel. Unfortunately, all of these require you to re-architect your page to accommodate the changes. To me, that is too much work. I need something simpler, with as few changes to the page as possible.

The Ultimate and Simple Solution: use the target attribute of the form.

The solution is an old trick defined in HTML, with a little JavaScript assistance.

In order to post data back to the server, we need a form tag. We usually do not specify the target attribute, in which case the response is returned to the current window by default. If a target attribute is specified on the form tag, the browser automatically routes the output to the specified frame or window when the server returns the data. For example,

Route data to a new window:

<form name="form1" id="form1" target="_blank" method="post" action="Default.aspx" onsubmit="..." >
...
</form>

Route data to an iframe:

<form name="form1" id="form1" target="myIframe" method="post" action="Default.aspx" onsubmit="...">
...
</form>
<iframe name="myIframe" id="myIframe" width="400" height="300"></iframe>

Note that the name attribute of the iframe is required in this case. By specification, the target attribute should be given the name of the frame or window, not its id, although the id happens to work in some WebKit browsers like Google Chrome.

With this simple solution in mind, I finally came up with a way to solve the above problem with a very minor change to the page. The following solution is presented in ASP.NET, but the technique can be applied elsewhere, such as in a plain HTML/CGI program.

All I need to do is dynamically add a target attribute to the form before the data is posted back to the server.

JavaScript Solution

First, let's define the script which does the injection. The JavaScript function may look like the following.

  function changeFormTarget() {
    var formObj = document.getElementById('form1');
    formObj.setAttribute('target', '_blank');
  }

Before the data is posted back to the server for processing, the code above injects the target attribute into the form element, producing the following HTML, which instructs the browser where to output the next document.

<form name="form1" id="form1" target="_blank" method="post" action="Default.aspx" onsubmit="...">

Then we hook this function to the element's onclick event before form submission. In ASP.NET, simply add the OnClientClick attribute to the control to instruct the ASP.NET engine to run the client script before the postback.

<asp:LinkButton ID="lnkRptToNewTarget" runat="server" OnClientClick="changeFormTarget()" OnClick="RptToNewTarget_Click">Generate Report to New Target</asp:LinkButton><br />
<asp:Button ID="btnRptToNewTarget" runat="server" OnClientClick="changeFormTarget()" OnClick="RptToNewTarget_Click" Text="Generate Report to New Target"  />

The above markup will result in the following HTML code:

<a onclick="changeFormTarget();" id="lnkRptToNewTarget" 
   href="javascript:WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions(&quot;lnkRptToNewTarget&quot;, &quot;&quot;, true, &quot;&quot;, &quot;&quot;, false, true))">Generate Report to New Target</a><br />
<input 
   type="submit" name="btnRptToNewTarget" id="btnRptToNewTarget"
   value="Generate Report to New Target" 
   onclick="changeFormTarget();WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions(&quot;btnRptToNewTarget&quot;, &quot;&quot;, true, &quot;&quot;, &quot;&quot;, false, false))" />

As you can see, with very minimal changes to the page, a new window is spawned from the current page when the result comes back.
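One side effect worth noting, my own observation rather than part of the solution above: once target is set to _blank it stays on the form, so every later postback on the page also opens a new window. A sketch of a variant that restores the default after the submission is dispatched; the name changeFormTargetOnce is mine, and the form object is passed in as a parameter so the helper can also run outside a browser:

```javascript
// Variant of changeFormTarget: set the target for this one submission,
// then remove it again so ordinary postbacks stay in the current window.
// A zero-delay timer runs after the submit has been dispatched.
function changeFormTargetOnce(formObj) {
  formObj.setAttribute('target', '_blank');
  setTimeout(function () {
    formObj.removeAttribute('target');
  }, 0);
}
```

In the page it would be invoked as changeFormTargetOnce(document.getElementById('form1')) from the OnClientClick attribute.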

jQuery Solution

If you are using jQuery, the solution is even simpler, without changing any element on the page. Simply add the following script and the onclick event is automatically wired up to the appropriate elements: I just ask jQuery to find my elements and attach my supplied onclick handler.

  $('#lnkRptToNewTarget,#btnRptToNewTarget').click(function() {
    $('form').attr('target', '_blank');
  });

With jQuery, the page remains intact, with no markup or element changed, and the result is the same as with the JavaScript approach. Of course, you could write your own automatic event wire-up to achieve what jQuery does here, but that would be a lot of work.

Sometimes a simple solution works like a charm; it also cuts down re-testing and shortens development time. I hope you find this piece of information useful.

The Code

If you want to test this yourself and see how it works, here is my simple backend event handler for the output. In real life, the handler could hand off to another process that generates a PDF or some other non-HTML document, and then call Response.Redirect() or Response.TransmitFile() to return the document to the client.

  protected void RptToNewTarget_Click(object sender, EventArgs e) {
    Response.Write(
      string.Format("Your report <b>{0}</b> is generated.", this.txtReportName.Text));
    Response.End();
  }

Here is the markup for the jQuery approach; you can easily alter it to the JavaScript solution discussed above. Note that the script block sits after the elements it selects, so they already exist when the selector runs (alternatively, wrap the code in $(document).ready()).

<form id="form1" runat="server" defaultfocus="txtReportName">
<div>
  Enter Report Name: <br />
  <asp:TextBox ID="txtReportName" runat="server"></asp:TextBox>
  <asp:RequiredFieldValidator ID="RequiredFieldValidatorTxtReportName" runat="server" 
       ErrorMessage="Please enter the report name." Display="Dynamic" ControlToValidate="txtReportName"></asp:RequiredFieldValidator><br />
  <asp:LinkButton ID="lnkRptToNewTarget" runat="server" OnClick="RptToNewTarget_Click">Generate Report to New Target</asp:LinkButton><br />
  <asp:Button ID="btnRptToNewTarget" runat="server" OnClick="RptToNewTarget_Click" Text="Generate Report to New Target"  />
</div>
</form>

<script type="text/javascript">
  $('#lnkRptToNewTarget,#btnRptToNewTarget').click(function() {
    $('form').attr('target', '_blank');
  });
</script>