Thursday, June 30, 2011

How to change the Gateway Metric on Windows 7

If multiple physical adapters are present in the network, Windows 7 compares the gateway metrics of the adapters and picks the one with the lowest value. To override the default behavior, you need to adjust the gateway metric of each physical adapter; the adapter with the lowest gateway metric always takes precedence and is used by Windows 7 automatically. For instance, if you want to use the wired connection when both wired and wireless are available, assign the lower gateway metric to your LAN card to ensure that your preferred adapter is used whenever it is available. In my previous post, I mentioned how to use the route change command to adjust the gateway metric. In this post, I will show an easy way to do it without going to the command prompt.

  • Open Network Connections from the Network and Sharing Center, or type ncpa.cpl in the search box of Windows Explorer or the Start menu.
  • Select your preferred connection, e.g., Local Area Connection, then right-click and select Properties.
  • On the Networking tab, select the Internet protocol version, e.g., Internet Protocol Version 4 (TCP/IPv4).
  • Then click the Properties button.
  • In the protocol's Properties dialog box, click Advanced....
  • In Advanced TCP/IP Settings, on the IP Settings tab, click Add... under Default gateways.
  • Uncheck the Automatic metric checkbox, enter your router IP (e.g., 192.168.1.1), and assign your metric. Click Add to insert the entry.

    Default Gateway Settings
  • Click all OK's to exit.

The changes take effect immediately. If you check your route table afterwards (using the route print command), you'll find a new entry under Persistent Routes.

===========================================================================
Persistent Routes:
  Network Address          Netmask  Gateway Address  Metric
          0.0.0.0          0.0.0.0      192.168.1.1      20
===========================================================================

Regardless of whether the IP address is obtained automatically or configured statically, the gateway metric can be changed either with the route change command or through the network connection GUI (ncpa.cpl).

When you use route print to verify your settings, the metric shown is usually about double the number you entered.

Using netsh int ip show config will show the exact settings you will find in the network connection properties.

I hope you'll find this information useful to you.

Wednesday, June 29, 2011

Forcing Windows 7 to use wired when available

For some unknown reason, Windows 7 prefers the wireless connection over the wired one. To force Windows 7 to use the wired connection when it is available, you need to adjust the gateway metric of the network adapters. Many posts found online recommend doing this via Network Connections (ncpa.cpl) by unchecking the Automatic metric checkbox and manually setting the Interface metric on each network adapter. Unfortunately, this only updates the interface metric, not the gateway metric, so it has no effect on Windows 7 and the problem persists.

You can type the following command at the command prompt to see the details of your network adapter settings [see example]:

netsh int ip show config

Or type the following to display the route table. In this output, the Metric column reflects only the gateway metric. [see example]

route print

To change gateway metric, there are two options. First is to use route change command at the command prompt. For example,

route change 0.0.0.0 mask 0.0.0.0 192.168.1.1 metric 20 if 13

where

  • 0.0.0.0 is the target network destination (IP address) found in the route table.

  • mask 0.0.0.0 is the subnet mask associated with the target network destination.

  • 192.168.1.1 is the IP address of the gateway, my router.

  • metric 20 sets the gateway metric to 20. The network interface with the lower metric takes precedence. In this example, I am assigning 20 to my wired network card, giving it the lowest gateway metric, which forces Windows 7 to use the wired connection whenever it is available. Also see KB299540.

  • if 13 applies the change only to the network interface with index 13. In this example, 13 is my Intel(R) 82577LM Gigabit Network Connection, which can be found in the Interface List section of route print.

There is no need to log out or reboot. The changes should take effect immediately.

The second option presented in my next post may be the preferable way, especially if you don't want to execute any command. Go and see my next how-to.

References:
The meaning of metric numbers, see KB299540.
How to use Route Command.

Example of netsh int ip show config

Configuration for interface "Wireless Network Connection"
    DHCP enabled:                         Yes
    IP Address:                           192.168.1.2
    Subnet Prefix:                        192.168.1.0/24 (mask 255.255.255.0)
    Default Gateway:                      192.168.1.1
    Gateway Metric:                       25
    InterfaceMetric:                      50
    DNS servers configured through DHCP:  192.168.1.1
                                          192.168.1.1
    ...
    Register with which suffix:           Primary only
    WINS servers configured through DHCP: None

Configuration for interface "Local Area Connection"
    DHCP enabled:                         No
    IP Address:                           192.168.1.200
    Subnet Prefix:                        192.168.1.0/24 (mask 255.255.255.0)
    Default Gateway:                      192.168.1.1
    Gateway Metric:                       256
    InterfaceMetric:                      20
    Statically Configured DNS Servers:    192.168.1.1
    ...
    Register with which suffix:           Primary only
    Statically Configured WINS Servers:   None
    ...

Example of route print

The Metric column in the IPv4 Route Table section is the gateway metric.

===========================================================================
Interface List
 13...5c 26 0a 23 40 d5 ......Intel(R) 82577LM Gigabit Network Connection
 14...00 24 d7 6c a6 fc ......Intel(R) Centrino(R) Ultimate-N 6300 AGN
 15...00 24 d7 6c a6 fd ......Microsoft Virtual WiFi Miniport Adapter
 10...5c ac 4c fd 7b 5e ......Bluetooth Device (Personal Area Network)
 16...00 50 56 c0 00 01 ......VMware Virtual Ethernet Adapter for VMnet1
 17...00 50 56 c0 00 08 ......VMware Virtual Ethernet Adapter for VMnet8
  1...........................Software Loopback Interface 1
===========================================================================

IPv4 Route Table
===========================================================================
Active Routes:
Network Destination        Netmask          Gateway       Interface  Metric
          0.0.0.0          0.0.0.0      192.168.1.1    192.168.1.200    266
          0.0.0.0          0.0.0.0      192.168.1.1      192.168.1.2     50
        127.0.0.0        255.0.0.0          On-link        127.0.0.1    306
        127.0.0.1  255.255.255.255          On-link        127.0.0.1    306
  127.255.255.255  255.255.255.255          On-link        127.0.0.1    306
      192.168.1.0    255.255.255.0          On-link    192.168.1.200    266
      192.168.1.0    255.255.255.0          On-link      192.168.1.2    281
      192.168.1.2  255.255.255.255          On-link      192.168.1.2    281
              ...              ...              ...              ...    ...

Monday, June 20, 2011

How to attach CKEditor keyup event

My current version: CKEditor 3.6

  editor.document.on('keyup', function(evt){...});

I am trying to attach a keyup event to CKEditor. Unfortunately, it stops firing as soon as setData() is called.

It turns out that editor.document is recreated every time setData() is called, and thus all the events attached to editor.document are gone forever. Re-attaching the event to editor.document is necessary. Luckily, setData() supports a callback.

  editor.setData('set some data', function(evt) {
    editor.document.on('keyup', myKeyupEvent);
  });

I don't understand why CKEditor doesn't support the keyup event in editor.on. You can't do this:

  editor.on('keyup', function(evt){...});

But for keypress or keydown event, you can do this:

  editor.on('key', function(evt){...});

There aren't many details about CKEditor events, and their API documentation isn't helpful either. Reading through the event API, I still have no idea when I should use editor.on to attach events and when I should use editor.document.on.

What I understand is that editor.on is good for custom events: you write your own event, hook it up, and then fire it yourself as needed. Events registered by editor.on are not removed after setData() has been called, which means there is no need to re-attach them even though editor.document is recreated.

Events registered by editor.document.on cover all the predefined document events, as long as you provide an event handler for them. The drawback is that the handlers must be registered again after editor.document has been recreated.

Currently, I have no custom events to define; thus, I find editor.document.on more useful than editor.on. I can hook as many events as I want without restrictions.

Wednesday, June 8, 2011

Posting an ad to Craigslist has become more difficult and frustrating

I haven't posted or purchased anything on Craigslist for a very long time. In my mind, Craigslist was a great service for local people to get good deals. Unfortunately, a lot of innocent people got spammed, and sad stories happened one after another. A few years ago Craigslist started requiring people who post classified ads in the USA or Canada to go through phone verification, and each successfully verified phone number is good for 90 days. For consumers, this is a good thing; it certainly eliminates a lot of spammers, especially the ones from overseas. However, this control, along with others, is extremely inconvenient for many legitimate posters (like marketers). A lot of them look for and turn to alternatives such as backpage.com, ebayclassifieds.com, etc. Sadly, the alternatives' traffic is still nowhere near Craigslist's.

I am not a marketer, but sometimes I have things to sell or give away, like everyone else. The first service that pops into my mind is Craigslist. I had been trying to post an ad since yesterday morning without success: none of my phone numbers could be verified.

First, I tried to use my Magic Jack phone number for verification. It turns out that Craigslist won't accept any VoIP phone, which means I cannot use Magic Jack for verification. Then I entered my cell phone number. Unfortunately, the page kept saying, “You are submitting telephone verification requests too rapidly. To prevent abuse, we require users to wait 5 minutes between requests, use no more than 3 telephone numbers in a 12 hour period, and not use any single phone number more than 3 times in a 12 hour period. Please wait and try again later.” At first, I believed the error message. But after waiting a few hours and trying again, I concluded that something must have gone wrong. Reporting the error to Craigslist was no use; I received the same explanation I could find on their phone authentication help page. What could I do? So I put it aside for the day and tried again this morning. The same error message reappeared, even though more than the 12-hour waiting period had passed. I couldn't believe I was still getting the same error. It implies that their phone verification service is not reliable, which could be why it frustrates so many marketers, besides other issues.

Craigslist's phone verification process is even harder to get through than using an ATM card when you're trying to remember your passcode. With an ATM, at least I get three physical attempts. With Craigslist phone verification, the three-attempt limit is very easy to exceed without your knowledge.

You cannot refresh the page or open the same URL in another browser window/tab. If you do, it counts as an attempt, because the page itself remembers your previous phone number. I am not talking about the browser cache; it has nothing to do with that. Your URL is tied to your email address, and when you enter a phone number, it is automatically tied to that phone number as well. This happens on the Craigslist back-end server, not in the browser. Thus, if you run into an issue here, don't refresh or paste the URL into another browser to try again; you will quickly burn two of your attempts. Unfortunately, my issue was not related to this.

To resolve the issue, I decided to sign up for an account with Craigslist. The site immediately detected a problem, saying “There is already an account associated with this email address,” and offered me a chance to reset my password. I have never signed up for an account with Craigslist, so I guess a temporary account was created when I, as a non-registered user, tried to post an ad. Obviously such a temporary account is required for their phone verification service. I followed the link to request a password reset, received the reset email instantly, reset my password, and finally had a registered account with Craigslist.

Having an account with Craigslist doesn't exempt me from phone verification when I post a classified ad. As far as I can tell, though, the password reset or the registration process cleared all the errors, including whatever was stored for my phone number on their back-end server. From then on, I could re-enter my cell phone number for verification on the page. Unfortunately, I was expecting a text message / SMS from Craigslist, but somehow Craigslist failed to deliver it. When I went to check the status of the sent code, it indicated that “the call has been placed.” Call? I had requested a text message, not a call! So I waited another 5 or more minutes and then requested a voice message instead of a text. My cell phone rang as soon as I hit the "send the code!" button. My ad was finally posted.

This whole verification process negates my impression of Craigslist. I hope Craigslist will improve it. As the alternatives grow their traffic, people may leave Craigslist if this verification process remains a headache. For now, Craigslist still holds a solid position in the marketplace.

Monday, June 6, 2011

Passed 70-515

As of today, I am certified for ASP.NET 4.0.

Learning from my previous experience, I drove to the test center yesterday and checked it out first. I don't understand why the same test center keeps moving from place to place! Every time I take an exam with them, they have moved. This time I made sure I knew how to get there, so everything went smoothly. As usual, I arrived an hour ahead and waited outside until it was time.

There are 51 questions in total for 2.5 hours. I finished them in about 1.75 hours. I am happy with my score; I may have missed only 1 or 2 questions. My score is 970 out of 1000, and the passing mark for this test is still 700.

For some reason, I feel 4.0 is a lot easier than 3.5. But I was surprised to be tested with some jQuery questions; there were at least 4 of them in my test. Frankly, I don't find them related to ASP.NET, and to me it doesn't make any sense to have them in the test. jQuery is not mentioned in the Microsoft exam objectives, so why am I being tested on jQuery syntax? Luckily I use jQuery and was able to answer them; otherwise, it could have been doom. Besides MVC 2 and the dynamic data stuff, I don't see anything new in the test. If you know 3.5 (including SP1) well and put some effort into learning MVC 2 and dynamic data, you will surely be ready for the test.

Good luck to everyone who is preparing or going to take this test!

Thursday, May 19, 2011

WCF and Interface

Some time ago, I wrote a few WCF services without using interfaces. As time went by, some functionality needed to expand. To avoid exposing all the service functionality on a single endpoint, implementing multiple interfaces became necessary so that multiple endpoints could be configured per interface. Unfortunately, this change broke every client call (JavaScript) in the old code.

In the code, the JavaScript function calls were all based on the proxy stub automatically generated by ASP.NET, where the page uses ScriptManager to manage the MS AJAX library. Thus, a change to the contract name specified in Web.config is automatically reflected in the generated stub.

Previously, there was no interface involved. The contract attribute of the endpoint defined in Web.config pointed directly to the name of the service itself. For example,

 <system.serviceModel>
   <behaviors>
     <endpointBehaviors>
       <behavior name="Shipment.Order.WebAspNetAjaxBehavior">
         <enableWebScript />
       </behavior>
       ...
     </endpointBehaviors>
   </behaviors>
   <services>
     <service name="Shipment.Order">
        <endpoint address="" 
              behaviorConfiguration="Shipment.Order.WebAspNetAjaxBehavior"
              binding="webHttpBinding" 
              contract="Shipment.Order" />
     </service>
     ...
   </services>
 </system.serviceModel>

Now, with the interface, the contract declared on the endpoint is the interface instead of the service itself.

  <service name="Shipment.Order">  
     <endpoint address="" 
           behaviorConfiguration="Shipment.Order.WebAspNetAjaxBehavior"
           binding="webHttpBinding" 
           contract="Shipment.IOrder" />
  </service>  

In the page, the original JavaScript would call the service operation like this:

    Shipment.Order.Confirm(myId);

In order to work with the interface approach, the JavaScript has to use the interface to make a call:

    Shipment.IOrder.Confirm(myId);

Since the service had already been in production for some time, changing it to use the interface was not a good idea; it would affect a lot of pages that use this service. Instead of making the existing service implement an interface, I derived a subclass from it so that the subclass inherits whatever its parent has. Then I simply configured an endpoint for this subclass instead of configuring multiple endpoints on the existing service. As a result, it works like a charm!
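
To make the approach concrete, here is a minimal sketch of how I read it. Apart from the Shipment namespace and the Order service from this post, every name here (IOrderTracking, OrderEx, the operation bodies) is hypothetical, so treat it as an illustration rather than the actual service code. The benefit is that the original class-based contract, and all the JavaScript that calls Shipment.Order, is never touched; only the new endpoint knows about the subclass.

  using System.ServiceModel;

  namespace Shipment {
    // The existing service stays exactly as it was (class-based contract),
    // so its endpoint and the generated JavaScript stub keep working.
    [ServiceContract]
    public class Order {
      [OperationContract]
      public string Confirm(int myId) { return "confirmed " + myId; }
    }

    // Hypothetical new contract for the expanded functionality.
    [ServiceContract]
    public interface IOrderTracking {
      [OperationContract]
      string Track(int myId);
    }

    // The subclass inherits everything Order already exposes and adds the new
    // contract. It gets its own <service>/<endpoint> entries in Web.config
    // (e.g., contract="Shipment.IOrderTracking"), leaving Shipment.Order untouched.
    public class OrderEx : Order, IOrderTracking {
      public string Track(int myId) { return "tracking " + myId; }
    }
  }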

If you are having a problem with the interface approach, or keep getting JavaScript errors such as “xxx is null or not an object” or “Cannot call method 'xxx' of undefined,” you had better look at the method you call in JavaScript. It must be changed to use the interface, because the JavaScript stub generated by ASP.NET is based on the contract defined in Web.config. You can change the namespace by declaring [ServiceContract(Namespace = "xxx")] on your interface/class, but the contract name won't change. You can examine the JavaScript stub by appending "/js" to the end of the service URL to compare the differences.

Wednesday, May 11, 2011

Aliases created by cliconfg.exe (32-bit / 64-bit) don't work on 64-bit Windows 7

cliconfg.exe can be found in two locations on a 64-bit system when the SQL client tools are installed:
- [64-bit] C:\Windows\System32\cliconfg.exe
- [32-bit] C:\Windows\SysWOW64\cliconfg.exe

I wonder if anyone else has experienced this.

Recently, I moved my development machine to 64-bit Windows 7, but I still have one project database running on 32-bit XP; the DB server is SQL Server 2005.

On my development box, I have SQL Server 2008 installed. I tried to use both the 32-bit and the 64-bit cliconfg.exe to create an alias, but I could not connect to that DB server on XP. In SQL Server Configuration Manager, the aliases created by both the 32-bit and the 64-bit cliconfg.exe only show up under the "SQL Native Client 10.0 Configuration (32bit)" category.

[Updated on May 15, 2011] As a matter of fact, the entry is correct because SQL Server 2005 is running on 32-bit XP; thus, the entry only shows up under the 32-bit category. Indeed, this entry can only be used in a .NET connection string. Unfortunately, it cannot be used by SQLCMD (by default found at "C:\Program Files\Microsoft SQL Server\100\Tools\Binn") for a remote connection. That is what I tried last time, and it failed.
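
For reference, this is roughly what using such an alias from .NET looks like. The alias name and database here are hypothetical placeholders, not values from my setup.

  using System;
  using System.Data.SqlClient;

  class AliasConnectionSketch {
    static void Main() {
      // "MyXpSql2005" stands for an alias created under the
      // "SQL Native Client 10.0 Configuration (32bit)" category.
      string connStr = "Data Source=MyXpSql2005;Initial Catalog=master;Integrated Security=True";

      using (SqlConnection conn = new SqlConnection(connStr)) {
        conn.Open();
        Console.WriteLine("Connected, server version: " + conn.ServerVersion);
      }
    }
  }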

If I use the non-32-bit client configuration in SQL Server Configuration Manager to create an alias, then the remote connection immediately works.

[Updated on May 15, 2011] Last time, I didn't check with a .NET application; I only used SQLCMD -S <alias> for the connection. Indeed, this entry only works for SQLCMD but fails for a .NET remote connection.

In the registry, an alias created by either cliconfg.exe will have an entry at

   HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\MSSQLServer\Client\ConnectTo

An alias created by the non-32-bit client configuration in SQL Server Configuration Manager will instead have an entry under the following registry key:

   HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo

Currently I don't have time to investigate further. But I just wonder why an alias created by either the 32-bit or the 64-bit cliconfg.exe utility won't work. I thought each version of cliconfg.exe maintained a separate list of aliases in the registry. Why are both currently pointing to the same registry key?

[Updated: May 15, 2011] The best explanation for why both the 32-bit and the 64-bit cliconfg.exe utilities create the same alias and registry entries in my case is that the SQL Server running on XP is 32-bit. The alias created by the non-32-bit client configuration in SQL Server Configuration Manager works for SQLCMD but not for .NET, while the entry shown under the "SQL Native Client 10.0 Configuration (32bit)" category works well in a .NET ConnectionString. Again, the main factor in my case is that the SQL Server is running on 32-bit.

Thursday, April 14, 2011

URL Routing Causes Login.aspx to Load and SyntaxError

It is very strange. After the URL routing module was added to this non-MVC Web application, the application kept trying to load Login.aspx into pages that don't need authentication, causing syntax errors like these:

Uncaught SyntaxError: Unexpected token <
Register.aspx:63 Uncaught Error: ASP.NET Ajax client-side framework failed to load.
Login.aspx:3 Uncaught SyntaxError: Unexpected token <
jsdebug:1 Uncaught ReferenceError: Type is not defined
Uncaught SyntaxError: Unexpected token <
Login.aspx:187 Uncaught ReferenceError: WebForm_AutoFocus is not defined

After a few hours of searching through the code, I realized that the problem was with WebResource.axd. All the scripts for WebForm validation, focus, postback, etc. were replaced by Login.aspx; the WebResource.axd requests were loading Login.aspx instead.

Another interesting thing is that I duplicated some of the user controls and pages into a temporary project for investigation but was unable to reproduce the same behavior. With the same routing table, the same Web.config, and a scaled-down Global.asax, Login.aspx is not loaded when it is not asked for, and all JavaScript is loaded as expected by WebResource.axd without problems. I still wonder what settings or events in the original application cause this problem.

Since the problem is caused by routing, one way to fix it is to bypass routing for these requests. Adding RouteTable.Routes.RouteExistingFiles = true; won't fix the problem; instead, a specific rule for ignoring the route is needed.


// For .NET 4.0
//RouteTable.Routes.Ignore("{resource}.axd/{*pathInfo}");  

// For .NET 3.5 SP1
RouteTable.Routes.Add(new Route("{resource}.axd/{*pathInfo}", new StopRoutingHandler()));

Tuesday, April 12, 2011

DateTime.Parse problem

Some DateTime values are taken from user input and instantiated with the simple DateTime constructor new DateTime(year, month, day);. When they are serialized into XML, the string looks like this:

   2001-07-08T00:00:00   

I tried to parse it back to a DateTime from the XML value string. Unfortunately, I tried every method I know and kept getting the String was not recognized as a valid DateTime error. For example,

  
    // Every method here causes String was not recognized as a valid DateTime error.  
  
    DateTime d = DateTime.Parse(query.Element("joinDate").ToString());
    
    string dateStr = query.Element("joinDate").ToString().Replace("T", "");
    DateTime d = XmlConvert.ToDateTime(dateStr, "yyyy-mm-dd hh:mm:ss");
    
    string dateStr = query.Element("joinDate").ToString().Replace("T", "");
    DateTime d = DateTime.ParseExact(dateStr, "yyyy-mm-dd hh:mm:ss", null);  
    
    string dateStr = query.Element("joinDate").ToString().Replace("T00:00:00", "");
    DateTime d = DateTime.ParseExact(dateStr, "yyyy-mm-dd", null);   
    

None of the above worked for me. I also tried to follow the examples/best practices discussed on MSDN, but the error wouldn't go away. The time zone is not my concern; as a matter of fact, my serialized XML value is not really represented in UTC ( 2001-07-08T00:00:00 ). I guess that may be why the MSDN suggestions don't work!

I finally gave up trying and used a regular expression to parse the date fields and then convert them to a DateTime manually. The following is my solution. I am sure there is a better way to handle it, but I just don't know how, and I probably don't understand how DateTime.Parse works!

   
// Manually parse the string to DateTime

string pattern = @"(?<year>\d{4})-(?<mth>\d{2})-(?<day>\d{2})T00:00:00";
System.Text.RegularExpressions.Match m = 
System.Text.RegularExpressions.Regex.Match(query.Element("joinDate").ToString(), pattern);
int year = int.Parse(m.Groups["year"].ToString());
int month = int.Parse(m.Groups["mth"].ToString());
int day = int.Parse(m.Groups["day"].ToString());

// this statement is also exactly how 
// I create the DateTime from the user's input.
DateTime d = new DateTime(year, month, day); 
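
A likely explanation, although I have not gone back to verify it against the original code: XElement.ToString() returns the element markup, e.g. <joinDate>2001-07-08T00:00:00</joinDate>, rather than just the text inside it, and that string is indeed not a valid DateTime. Using the Value property (or the explicit DateTime cast that XElement provides) hands the parser only the ISO-style text. A small self-contained sketch:

  using System;
  using System.Xml.Linq;

  class DateParseSketch {
    static void Main() {
      // The element content serializes as "2001-07-08T00:00:00".
      XElement joinDate = new XElement("joinDate", new DateTime(2001, 7, 8));

      // .Value returns only the text content, which DateTime.Parse accepts.
      DateTime d1 = DateTime.Parse(joinDate.Value);

      // XElement also defines an explicit conversion operator to DateTime.
      DateTime d2 = (DateTime)joinDate;

      Console.WriteLine("{0} / {1}", d1, d2);
    }
  }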

Read XML from MemoryStream

This example follows the previous post. What if we want to generate the XML into a stream instead of a physical file? Initially, I thought the solution was very simple and could be done in a minute. Instead, it took me hours to figure out.

    public MemoryStream ToXml() {
      MemoryStream ms = new MemoryStream();
      
      XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
      ns.Add("", "");    

      XmlSerializer ser = new XmlSerializer(typeof(Subscribers));      
      ser.Serialize(ms, new Subscribers(), ns);       
 
      return ms;
    } 

If you read the XML from the MemoryStream returned by the above code with XDocument or XmlDocument, you will encounter a "Root element is missing" error. For example,

  using (StreamReader reader = new StreamReader(new Subscribers().ToXml())) {
    subscribers = XDocument.Load(reader);
  }

When the XmlSerializer has finished writing into the MemoryStream, the stream position is left at the end of the XML data. Thus, the function should rewind the position to the beginning of the stream before returning the MemoryStream to the caller. Adding ms.Seek(0, SeekOrigin.Begin); before the return ms; statement does the trick.

    public MemoryStream ToXml() {
      MemoryStream ms = new MemoryStream();
      
      XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
      ns.Add("", "");    

      XmlSerializer ser = new XmlSerializer(typeof(Subscribers));      
      ser.Serialize(ms, new Subscribers(), ns);       
      
      ms.Seek(0, SeekOrigin.Begin);  // rewind the pointer the top of the stream

      return ms;
    } 

Serialization: How to override the element Name of an item inside a Collection

We cannot control the element name used for an object when it is an item inside a collection, array, list, etc. We can use XmlAttributeOverrides to override each property name of that object, but not the element name of the object itself.
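
As a side note, here is a minimal, self-contained sketch of that limitation. The Subscriber class below is a one-property stand-in, not the full class used later in this post: the property can be renamed through XmlAttributeOverrides, but each list item still serializes as <Subscriber>.

  using System;
  using System.Collections.Generic;
  using System.Xml.Serialization;

  public class Subscriber {
    public string FirstName { get; set; }
  }

  class OverridesSketch {
    static void Main() {
      // Rename the FirstName property to <firstName> at serialization time.
      XmlAttributes attrs = new XmlAttributes();
      attrs.XmlElements.Add(new XmlElementAttribute("firstName"));

      XmlAttributeOverrides overrides = new XmlAttributeOverrides();
      overrides.Add(typeof(Subscriber), "FirstName", attrs);

      XmlSerializer ser = new XmlSerializer(typeof(List<Subscriber>), overrides);
      ser.Serialize(Console.Out,
          new List<Subscriber> { new Subscriber { FirstName = "Pat" } });
      // Each item is still written as <Subscriber>...</Subscriber>;
      // only the property element became <firstName>.
    }
  }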

Consider the following scenario for the XML structure: <customers><customer></customer>...<customer></customer>...</customers>

  <?xml version="1.0" encoding="utf-8"?>  
  <customers>
    <customer id="2600CD00">
      <firstName>Pat</firstName>
      <lastName>Thurston</lastName>
      ...
    </customer>
    <customer id="1E9CC4B0">
      <firstName>Kari</firstName>
      <lastName>Furse</lastName>
      ...
    </customer>
    <customer id="60R120B3">
      <firstName>Carl</firstName>
      <lastName>Stuart</lastName> 
      ...
    </customer>
    ...  
  </customers>

We want to make use of the existing classes to generate the above XML structure.

Before:

  public class Subscriber {
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string ID { get; set; }
    ...
  }

After:

  [XmlRoot(ElementName = "customer")]
  public class Subscriber {
    [XmlElement(ElementName="firstName")]
    public string FirstName { get; set; }

    [XmlElement(ElementName="lastName")]
    public string LastName { get; set; }

    [XmlAttribute( AttributeName="id" )]
    public string ID { get; set; }
    
    ...
  }
Before:

  public class Subscribers 
    : System.ComponentModel.IListSource {

    System.ComponentModel.BindingList<Subscriber> bList;
    ...
    
  }
After:

public class Subscribers 
  : System.ComponentModel.IListSource {

  System.ComponentModel.BindingList<Subscriber> bList;
  ...
  
  public void ToXml(string outputFileName) {   
    StreamWriter w = new StreamWriter(outputFileName); 
    
    XmlRootAttribute root 
     = new XmlRootAttribute("customers");        
      
    XmlSerializerNamespaces ns 
     = new XmlSerializerNamespaces();
    ns.Add("", "");    
    
    XmlSerializer ser 
     = new XmlSerializer(bList.GetType(), root);
    ser.Serialize(w, bList, ns); 
  }    
}    

Now when we call Subscribers to generate XML: new Subscribers().ToXml("customers.xml");, the result is not what we want.

  <?xml version="1.0" encoding="utf-8"?>  
  <customers>
    <Subscriber id="2600CD00">
      <firstName>Pat</firstName>
      <lastName>Thurston</lastName>
      ...
    </Subscriber>
    <Subscriber id="1E9CC4B0">
      <firstName>Kari</firstName>
      <lastName>Furse</lastName>
      ...
    </Subscriber>
    <Subscriber id="60R120B3">
      <firstName>Carl</firstName>
      <lastName>Stuart</lastName> 
      ...
    </Subscriber>
    ...  
  </customers>

If we serialize our Subscriber class directly, we can of course alter the element name at the root level:

  <?xml version="1.0" encoding="utf-8"?>    
    <customer id="AE19600F">
      <firstName>Janko</firstName>
      <lastName>Cajhen</lastName>
      ...
    </customer>

Inspired by another personal project of mine, I fortunately figured out my own solution.

The Solution:

  [XmlRoot("customers")]
  public class Subscribers 
    : System.ComponentModel.IListSource {

    System.ComponentModel.BindingList<Subscriber> bList;
    ...
    
    
    [XmlElement("customer")]
    public List<Subscriber> All {
      get {
        return bList.ToList();
      }
    }
    
    ...
    
    public void ToXml(string outputFileName) {
      StreamWriter w = new StreamWriter(outputFileName); 
        
      XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
      ns.Add("", "");    

      XmlSerializer ser = new XmlSerializer(typeof(Subscribers));      
      ser.Serialize(w, new Subscribers(), ns); 
    }    
  }

Using DataPager in ListView

If the DataSource is not known statically at design time, the DataPager may not work correctly. The following error can be expected when you click one of the links provided by the DataPager for the second time:

Failed to load viewstate. The control tree into which viewstate is being loaded must match the control tree that was used to save viewstate during the previous request. For example, when adding controls dynamically, the controls added during a post-back must match the type and position of the controls added during the initial request.

This problem occurs because, when the DataSource is only known at runtime, the DataPager has no idea how to perform or calculate paging without knowing which page is supposed to be displayed (i.e., the StartRowIndex and MaximumRows of the page). Thus, you need to provide this missing information to the DataPager before data binding.

From a Google search, you may find that quite a few people implemented the PreRender event of the DataPager to perform data binding. Unfortunately, it doesn't work for this scenario: you can bind the data in the DataPager's PreRender event, but you are unable to supply the paging properties to the DataPager as mentioned above. Both the StartRowIndex and MaximumRows properties need to be set before data binding. This problem took me a few hours to resolve, and it turns out that the solution is very simple.

The Solution: Implement the PagePropertiesChanging event of the ListView. The PagePropertiesChangingEventArgs event argument provides all the needed paging properties (StartRowIndex and MaximumRows) so that you can supply them to the DataPager.

    protected void ListView1_PagePropertiesChanging(object sender, PagePropertiesChangingEventArgs e) {     
      this.DataPage1.SetPageProperties(e.StartRowIndex, e.MaximumRows, false);
      BindData();  // set DataSource to ListView and call DataBind() of ListView
    }

If the DataPager is placed inside the ListView, do this:

    protected void ListView1_PagePropertiesChanging(object sender, PagePropertiesChangingEventArgs e) {
      ListView lv = sender as ListView;
      DataPager pager = lv.FindControl("DataPage1") as DataPager;
      pager.SetPageProperties(e.StartRowIndex, e.MaximumRows, false);
      BindData();  // set DataSource to ListView and call DataBind() of ListView
    }

Sunday, April 10, 2011

StructureMap Configuration and Object Creation

Reference: StructureMap - Scoping and Lifecycle Management
My Test version: StructureMap 2.6.1

Configuration for Object Creation

Recently I've used StructureMap in one of my projects and I am beginning to like it, especially since I can control object lifecycles via StructureMap without changing my code. The following are some of the ways to configure how StructureMap creates object instances.

  1. Per request basis

    StructureMap constructs object instances transiently by default; thus, each time you get a new instance.

        public void NewPerRequest() {
    
          Container mymap = new Container(x => {
            x.For<ISimple>().Use<Simple>();
          });
    
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
    
          string formatter = "PerRequest [default]: instances are the same? {0}";
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
        }
    
  2. Singleton

    StructureMap can apply the Singleton pattern for you if you want one and only one instance of your object to be active during the life of the application.

        public void Singleton() {
          Container mymap = new Container(x => {
            x.For<ISimple>().Singleton().Use<Simple>();
          });
    
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
    
          string formatter =  "Singleton: instances are the same? {0}";
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
        }    
    
  3. HttpContextScoped

    You can configure StructureMap to construct objects that live in HttpContext scope. This method only applies to Web applications.

        public void HttpContextScoped() {
          Container mymap = new Container(x => {
            x.For<ISimple>().HttpContextScoped().Use<Simple>();
          });
    
          using (new MockHttpContext()) {
            ISimple first = mymap.GetInstance<ISimple>();
            ISimple second = mymap.GetInstance<ISimple>();
    
            string formatter = "HttpContextScoped: instances are the same: {0}";
            System.Console.WriteLine(formatter, ReferenceEquals(first, second));
          }
        }
    
  4. HttpContextLifecycle

    Configure StructureMap to construct objects that live in the HttpContext life cycle. This method only applies to Web applications.

        public void HttpContextLifecycle() {
          Container mymap = new Container(x => {
            x.For<ISimple>().LifecycleIs(new HttpContextLifecycle()).Use<Simple>();
          });
    
          using (new MockHttpContext()) {
            ISimple first = mymap.GetInstance<ISimple>();
            ISimple second = mymap.GetInstance<ISimple>();
    
            string formatter = "HttpContextLifecycle: instances are the same: {0}";
            System.Console.WriteLine(formatter, ReferenceEquals(first, second));
          }
        }
    
  5. HttpSessionLifecycle

    Configure StructureMap to instantiate an object in the HttpSession context. Like the previous two methods, it only works in a regular Web environment.

        public void HttpSessionLifecycle() {
          Container mymap = new Container(x => {
            x.For<ISimple>().LifecycleIs(new HttpSessionLifecycle()).Use<Simple>();
          });
    
    
          #region Create HttpSession environment for test
          // Mock a new HttpContext by using SimpleWorkerRequest
          System.Web.Hosting.SimpleWorkerRequest request =
            new System.Web.Hosting.SimpleWorkerRequest("/", string.Empty, string.Empty, string.Empty, new System.IO.StringWriter());
    
          System.Web.HttpContext.Current = new System.Web.HttpContext(request);
    
          MockHttpSession mySession = new MockHttpSession(
              Guid.NewGuid().ToString(),
              new MockSessionObjectList(),
              new System.Web.HttpStaticObjectsCollection(),
              600,
              true,
              System.Web.HttpCookieMode.AutoDetect,
              System.Web.SessionState.SessionStateMode.Custom,
              false);
    
          System.Web.SessionState.SessionStateUtility.AddHttpSessionStateToContext(
             System.Web.HttpContext.Current, mySession
          );
          #endregion
    
          // now we are ready to test
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
    
          string formatter = "HttpSessionLifecycle: instances are the same: {0}";
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
    
        }
    
  6. HybridSessionLifecycle

    The lifecycles related to HttpContext or HttpSession can only be used in an ASP.NET Web environment; you won't be able to use them in a client or service environment. HybridSessionLifecycle may be used to resolve this: it uses HttpContext storage if it exists and otherwise falls back to ThreadLocal storage. However, I found that this method somehow doesn't work with an HttpHandler.

        // According to my experience, 
        // HybridSessionLifecycle works in most situations (including regular Web environment, 
        // Web service, console) but it doesn't work well with HttpHandler while 
        // HybridHttpOrThreadLocalStorage does. 
        public void HybridSessionLifecycle() {
          Container mymap = new Container(x => {
            x.For<ISimple>().LifecycleIs(new HybridSessionLifecycle()).Use<Simple>();
          });
    
    
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
      
          string formatter = "HttpSessionLifecycle: instances are the same: {0}";
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
        }
    
  7. HybridHttpOrThreadLocalScoped

    Like HybridSessionLifecycle, it works in most situations. It also works well in an HttpHandler.

       // Unlike HybridSessionLifecycle, it works well with HttpHandlers and WCF Services 
        // besides a regular ASP.NET environment.
        public void HybridHttpOrThreadLocalScoped() {
          Container mymap = new Container(x => {        
            x.For<ISimple>().HybridHttpOrThreadLocalScoped().Use<Simple>();
          });
    
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
    
          string formatter = "HybridHttpOrThreadLocalScoped: instances are the same: {0}";
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
        }
    
  8. UniquePerRequestLifecycle

    You can instruct StructureMap to create a unique instance per request, ensuring no corrupted data and a fresh state every time.

        public void UniquePerRequestLifecycle() {
          Container mymap = new Container(x => {
            x.For<ISimple>().LifecycleIs(new UniquePerRequestLifecycle()).Use<Simple>();
          });
    
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
    
          string formatter = "HttpContext Scope: instances are the same: {0}";
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
        }
    

The Code

  • NUnit Test

    using System;
    using StructureMapTest.Mocks;
    using StructureMap;
    using StructureMap.Pipeline;
    using StructureMap.Configuration.DSL;
    using NUnit.Framework;
    
    namespace StructureMapTest.UnitTest {
      [TestFixture]
      public class ReferenceEqualTest {
    
        private delegate NUnit.Framework.Constraints.SameAsConstraint CompareConstraintDelegate(object expected);
    
        private void Compare<T>(string formatter, CompareConstraintDelegate sameOrNot) where T : ILifecycle {
          Container mymap = new Container(x => {
            T t = Activator.CreateInstance<T>();
            x.For<ISimple>().LifecycleIs(t).Use<Simple>();
          });
    
          Compare(mymap, formatter, sameOrNot);
        }
    
        private void Compare(Container mymap, string formatter, CompareConstraintDelegate sameOrNot) {
          ISimple first = mymap.GetInstance<ISimple>();
          ISimple second = mymap.GetInstance<ISimple>();
    
          System.Console.WriteLine(formatter, ReferenceEquals(first, second));
    
          // Simulate to 2 situations:
          // 1. Assert.That(first, Is.SameAs(second));
          // 2. Assert.That(first, Is.Not.SameAs(second));
          Assert.That(first, sameOrNot(second));
        }
    
        [Test]
        public void Different_on_per_request_basis() {
    
          Container mymap = new Container(x => {
            x.For<ISimple>().Use<Simple>();
          });
    
          Compare(mymap, "PerRequest [default]: instances are the same? {0}", Is.Not.SameAs);
        }
    
        [Test]
        public void Same_on_Singleton_instance() {
          Container mymap = new Container(x => {
            x.For<ISimple>().Singleton().Use<Simple>();
          });
    
          Compare(mymap, "Singleton: instances are the same? {0}", Is.SameAs);
        }
    
        [Test]
        public void Same_on_HttpContextScoped() {
          Container mymap = new Container(x => {
            x.For<ISimple>().HttpContextScoped().Use<Simple>();
          });
    
          using (new MockHttpContext()) {
            Compare(mymap, "HttpContextScoped: instances are the same: {0}", Is.SameAs);
          }
        }
    
        [Test]
        public void Same_on_HttpContextLifecycle() {
          Container mymap = new Container(x => {
            x.For<ISimple>().LifecycleIs(new HttpContextLifecycle()).Use<Simple>();
          });
    
          using (new MockHttpContext()) {
            Compare(mymap, "HttpContextLifecycle: instances are the same: {0}", Is.SameAs);
          }
        }
    
        [Test]
        public void Same_on_HttpSessionLifecycle() {
          Container mymap = new Container(x => {
            x.For<ISimple>().LifecycleIs(new HttpSessionLifecycle()).Use<Simple>();
          });
    
          #region Create HttpSession environment for test
          // Mock a new HttpContext by using SimpleWorkerRequest
          System.Web.Hosting.SimpleWorkerRequest request =
            new System.Web.Hosting.SimpleWorkerRequest("/", string.Empty, string.Empty, string.Empty, new System.IO.StringWriter());
    
          System.Web.HttpContext.Current = new System.Web.HttpContext(request);
    
          MockHttpSession mySession = new MockHttpSession(
              Guid.NewGuid().ToString(),
              new MockSessionObjectList(),
              new System.Web.HttpStaticObjectsCollection(),
              600,
              true,
              System.Web.HttpCookieMode.AutoDetect,
              System.Web.SessionState.SessionStateMode.Custom,
              false);
    
          System.Web.SessionState.SessionStateUtility.AddHttpSessionStateToContext(
             System.Web.HttpContext.Current, mySession
          );
          #endregion
    
          // now we are ready to test
          Compare(mymap, "HttpSessionLifecycle: instances are the same: {0}", Is.SameAs);
        }
    
        // According to my experience, 
        // HybridSessionLifecycle works in most situations (including regular Web environment, 
        // Web service, console) but it doesn't work well with HttpHandler while 
        // HybridHttpOrThreadLocalStorage does. 
        [Test]
        public void Same_on_HybridSessionLifecycle() {
          Compare<HybridSessionLifecycle>("HybridSessionLifecycle: instances are the same: {0}", Is.SameAs);
        }
    
        // Unlike HybridSessionLifecycle, it works well with HttpHandlers and WCF Services 
        // besides a regular ASP.NET environment.
        [Test]
        public void Same_on_HybridHttpOrThreadLocalScoped() {
          Container mymap = new Container(x => {
            //x.For<ISimple>().HybridHttpOrThreadLocalScoped().Use<Simple>().Named("MyInstanceName");
            x.For<ISimple>().HybridHttpOrThreadLocalScoped().Use<Simple>();
          });
          Compare(mymap, "HybridHttpOrThreadLocalScoped: instances are the same: {0}", Is.SameAs);
        }
    
        [Test]
        public void Different_on_UniquePerRequestLifecycle() {
          Compare<UniquePerRequestLifecycle>("HttpContext Scope: instances are the same: {0}", Is.Not.SameAs);
        }
    
      }
    }
    
  • class Simple

    using System;
    
    namespace StructureMapTest.Mocks {
      public interface ISimple {
        void DoSomething();
      }
    
      public class Simple : ISimple {
        public void DoSomething() {
        }
      }
    }
    
  • class MockHttpContext

    using System;
    
    namespace StructureMapTest.Mocks {
      public class MockHttpContext : IDisposable {
        private readonly System.IO.StringWriter sw;
    
        public MockHttpContext() {
          var httpRequest = new System.Web.HttpRequest("notExisted.aspx", "http://localhost", string.Empty);
          sw = new System.IO.StringWriter();
          var httpResponse = new System.Web.HttpResponse(sw);
          System.Web.HttpContext.Current = new System.Web.HttpContext(httpRequest, httpResponse);
        }
    
        public void Dispose() {
          sw.Dispose();
          System.Web.HttpContext.Current = null;
        }
      }
    }
    
  • class MockHttpSession and class MockSessionObjectList

    using System;
    
    namespace StructureMapTest.Mocks {
      // This class follows the example at:
      // http://msdn.microsoft.com/en-us/library/system.web.sessionstate.ihttpsessionstate.aspx
      public sealed class MockHttpSession : System.Web.SessionState.IHttpSessionState {
        ...
      } 
    } 
    
    using System;
    
    namespace StructureMapTest.Mocks {
      // This class follows the example at 
      // http://msdn.microsoft.com/en-us/library/system.web.sessionstate.isessionstateitemcollection(v=VS.90).aspx
      public class MockSessionObjectList : System.Web.SessionState.ISessionStateItemCollection {
        ...
      } 
    }  
    

Thursday, February 24, 2011

Why validators error messages not displayed in Google Chrome?

I have asked myself several times: why are the validators' error messages not displayed in Google Chrome? It is weird. Not a single validation error message is displayed on the page as soon as the following CSS is added. It works well with IE, Safari and Mozilla-type browsers but fails in Google Chrome.

  div.sink {padding:20px; width: 190px;border:1px silver solid;}
  div.sink input[type="text"]{width: 190px;}
  div.sink input[type="submit"]{width: 50px;}

I am sure that all the validators fire and the page consequently reports itself as invalid, but no error message is shown. There is no doubt that the problem is somehow related to the above CSS. To reduce the complexity of the page and more easily nail down the problem, I marked up a simple login page for investigation.

[Screenshot: login page]

With a bit of modification, the page looks like this: very simple, with only text boxes, RequiredFieldValidators and a login button. There is no code-behind (no OnClick event); it is straight markup.

<head>
<style type="text/css">

div.login {padding:20px; width: 190px; border:1px silver solid;}
div.login input[type="text"], div.login input[type="password"]{ width: 190px; }
div.login input[type="submit"]{width: 50px;}

</style>
</head>
<body>
<form id="form1" runat="server">
<div class="login">

  User Name:<br />
  <asp:TextBox id="txtUsername" runat="server" MaxLength="30"></asp:TextBox><br />     
  <asp:RequiredFieldValidator ID="RequiredFieldValidatorUsername" runat="server" 
       ControlToValidate="txtUsername" Display="Dynamic" 
       ErrorMessage="A username is required."></asp:RequiredFieldValidator><br />

  Password:<br />
  <asp:TextBox id="txtPassword" runat="server"  TextMode="Password" MaxLength="180" ></asp:TextBox><br />     
  <asp:RequiredFieldValidator ID="RequiredFieldValidatorPassword" runat="server" 
       ControlToValidate="txtPassword" Display="Dynamic" 
       ErrorMessage="A password is required."></asp:RequiredFieldValidator><br />

 <asp:Button ID="btnLogin" runat="server" Text="Log In" /><br />

</div>
</form>
</body>

With the above markup, the error message is shown at least for the User Name field, unlike on my problem page. The funny thing is that in this simple example, only the last validator fails to display its error message if I keep adding similar controls and validators. For sure, everything works normally if the CSS is removed.

[Screenshot: no error message for the one and only validator]
[Screenshots 1-3: no error message for the last validator]

The CSS is not complex. As a matter of fact, it is very simple: all it does is set the width of the input boxes and the button on the page. To resolve this, I could simply remove the CSS and add the width back to the controls themselves. However, I cannot do that with my original page. Most of the controls come from user controls that won't let me set a width. In addition, if I managed to add the width to them, the change would affect other pages across the entire project unless I did it programmatically instead of statically, which means going into every single user control and adding code so that it can accept a width change. I don't want to go that route.

After playing around for some time, I finally found the problem: it is the width setting in the first line.

  div.login {padding:20px; width: 190px; border:1px silver solid;}
  div.login input[type="text"], div.login input[type="password"]{ width: 190px; }
  div.login input[type="submit"]{width: 50px;}

Adding 4 more pixels fixes the problem!

  div.login {padding:20px; width: 194px; border:1px silver solid;}
  div.login input[type="text"], div.login input[type="password"]{ width: 190px; }
  div.login input[type="submit"]{width: 50px;}

It appears that Google Chrome cares about the outer width of the DIV when such CSS is specified, while most other browsers seem to ignore it when the outer DIV is not wide enough for its child components. From this instance, I learned that I need to pay attention to how I declare widths in my CSS to prevent this from happening.

Saturday, February 12, 2011

Resource interpreted as image but transferred with MIME type text/html

One of my pages failed in Google Chrome. Because of the error in the subject line, I could not debug it with the Chrome debugger; the script won't be loaded into the debugger if the page is interpreted as an image. I wondered why and what caused it, and I could not ignore it.

After hours of looking through the code, I finally found the problem. In my case, I had forgotten to include a JavaScript library reference in one of my user controls. As soon as I added it back, the error was gone and the page could be loaded into the Chrome debugger like any other normal page. Everything works fine.

Although the error message from Chrome is confusing, it at least alerted me that something was wrong with my page. I am glad I found the problem and fixed it.

Why page events get called twice?

There are a lot of discussions about why the Page events get called twice or even multiple times. Most of the solutions found via Google suggest setting AutoEventWireup to false (it is true by default). Is that really helpful? For sure, a lot of people will disagree, and so do I. The AutoEventWireup fix does not apply to all cases. In my experience, most of the time this issue is related to an image resource.

Consider the following HTML code in the ASP.NET page markup.

<img src="" id="img1" alt="an image" style="display:none" />

The HTML is valid, but it is a troublemaker in an ASP.NET Web Form: it causes the page events to be called twice because the src attribute is empty. Browsers typically resolve an empty src to the current page's URL and request it again to fetch the "image", which runs the page life cycle a second time. Removing the src attribute entirely fixes the problem.

Let's consider another image-related scenario. What if you want to set a background color on a table cell but mistakenly use the background attribute instead of background-color?

<TD background="#008080">

In this case, the page events are also called twice, presumably because the browser treats the background value as a URL and "#008080" resolves back to the current page. But if an HTML color name is used, everything works fine; thus the following HTML won't cause trouble although it is not correct.

<TD background="Teal">

I am not quite sure how or when CalliHelper.EventArgFunctionCaller binds the above scenarios to the page events; none of these elements are declared to run at the server. But I do know that if the page events are being called twice, the first thing I will check is the markup, especially the HTML related to image resources.

Using Form Target Attribute

[ Scenario ] [ The Solution ] [ Implemented with JavaScript ] [ Implemented with JQuery ] [ The Code ]

Problem Statement

A page may have several links and buttons that produce different forms of output. Every single piece of data defined on the page, including the input supplied by the user, is needed for the postback process. Is there a simple way to redirect the output to another window without overwriting the current page, so that the current page content stays persistent?

Solution 1: Can we use target attribute of the element?

For a hyperlink, you might think of using the target attribute to accomplish this. With this approach, you have to gather all the needed input from the current page yourself and put it into the query string of the URL before posting the data back to the server. This introduces another problem: the query string size limitation, which differs across platforms and browser types. To say the least, there is work to be done on the client side. In addition, your server page must be configured and coded to handle HTTP GET to process the request.

For an input button, we are out of luck: there is no target attribute defined in the specification, so the action relies completely on the form's action.

Solution 2: How about AJAX?

Another approach is partial page rendering, which provides a better user experience by eliminating the full-page round-trip overhead. Does it really matter in our case, when we need every single piece of data defined or entered by the user on that page? It may save the overhead of reloading some resources, but other than that, all the data on the page is still posted back to the server for processing. What we want is the output in another window, so that the user can continue to use the same data to generate another form of output in a separate window, and so on. Indeed, using AJAX is a bit complicated here. Of course, you could use a library like jQuery to set up your AJAX calls, and in ASP.NET you could simply use ScriptManager and UpdatePanel. Unfortunately, all of these require you to re-architect the page to accommodate the changes. To me, that is too much work; I need something simpler that changes the page as little as possible.

The Ultimate and Simple Solution: using the target attribute of the form.

The solution is an old trick defined in HTML, with a little JavaScript assistance.

To post data back to the server, we need a form tag. We usually don't specify the target attribute, in which case the response is returned to the current page by default. If a target attribute is specified on the form tag, the browser automatically routes the output to the specified frame or window when the response comes back from the server. For example,

Route data to a new window:

<form name="form1" id="form1" target="_blank" method="post" action="Default.aspx" onsubmit="..." >
...
</form>

Route data to an iframe:

<form name="form1" id="form1" target="myIframe" method="post" action="Default.aspx" onsubmit="...">
...
</form>
<iframe name="myIframe" id="myIframe" width="400" height="300"></iframe>

Note that the name attribute of the iframe is required in this case. By specification, the target attribute should be given the name of the frame or window, not its id. However, the id happens to work in some WebKit-based browsers such as Google Chrome.

With this simple solution in mind, I finally came up with a way to solve the above problem with a very minor change to the page. The following is the solution presented in ASP.NET; the technique can be applied elsewhere as well, such as in a simple HTML/CGI program.

What I need to do is dynamically add a target attribute to the form before the data is posted back to the server.

JavaScript Solution

First, let's define the script which does the injection. The JavaScript function may look like the following.

  function changeFormTarget() {
    var formObj = document.getElementById('form1');
    formObj.setAttribute('target', '_blank');
  }

Before the data is posted back to the server for processing, the above code injects the target attribute into the form element, producing the following HTML, which instructs the browser where to render the response.

<form name="form1" id="form1" target="_blank" method="post" action="Default.aspx" onsubmit="...">

Then we hook this function to the onclick event of the element so it runs before form submission. In ASP.NET, simply add the OnClientClick attribute to the control to instruct the ASP.NET engine to run the client script before the postback.

<asp:LinkButton ID="lnkRptToNewTarget" runat="server" OnClientClick="changeFormTarget()" OnClick="RptToNewTarget_Click">Generate Report to New Target</asp:LinkButton><br />
<asp:Button ID="btnRptToNewTarget" runat="server" OnClientClick="changeFormTarget()" OnClick="RptToNewTarget_Click" Text="Generate Report to New Target"  />

The above markup will result in the following HTML code:

<a onclick="changeFormTarget();" id="lnkRptToNewTarget" 
   href="javascript:WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions("lnkRptToNewTarget", "", true, "", "", false, true))">Generate Report to New Target</a><br />
<input 
   type="submit" name="btnRptToNewTarget" id="btnRptToNewTarget"
   value="Generate Report to New Target" 
   onclick="changeFormTarget();WebForm_DoPostBackWithOptions(new WebForm_PostBackOptions("btnRptToNewTarget", "", true, "", "", false, false))" />

As you can see, with very minimal changes to the page, a new window is spawned from the current page when the result comes back.
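
One caveat worth noting (my own hedged refinement, not part of the original post): once the target attribute has been set, it stays on the form, so any later, ordinary postback from the same page would also open a new window. A possible sketch that restores the default behavior after the submit has been dispatched:

  function changeFormTarget() {
    var formObj = document.getElementById('form1');
    formObj.setAttribute('target', '_blank');
    // Remove the attribute again shortly after the browser has dispatched
    // the submit, so subsequent normal postbacks stay in the current window.
    // The small delay is a pragmatic choice for this sketch.
    window.setTimeout(function () {
      formObj.removeAttribute('target');
    }, 100);
  }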

JQuery Solution

If you're using JQuery, the solution is even simpler and doesn't require changing any element in the page. Simply add the following script, and the onclick event will be automatically wired to the appropriate elements. In my example, JQuery scans for my elements and wires them up with my supplied click handler.

  $('#lnkRptToNewTarget,#btnRptToNewTarget').click(function() {
    $('form').attr('target', '_blank');
  });

With JQuery, the page basically remains intact, with no markup or element being changed, and the result is the same as with the JavaScript approach. Of course, you could write your own auto event wire-up to achieve what JQuery does, but that would be a lot of work; a bare-bones sketch is shown below.
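
For comparison, here is a bare-bones plain-JavaScript version of that wiring for just these two controls (my own illustration, not code from the original post). The selector matching and the cross-browser event handling are what JQuery saves you from writing:

  // Plain-JavaScript equivalent of the JQuery snippet above, wired up for
  // the two known controls only. Run this after the elements exist, e.g.,
  // place the script near the end of the page.
  var targetIds = ['lnkRptToNewTarget', 'btnRptToNewTarget'];
  function setBlankTarget() {
    document.getElementById('form1').setAttribute('target', '_blank');
  }
  for (var i = 0; i < targetIds.length; i++) {
    var el = document.getElementById(targetIds[i]);
    if (el) {
      // addEventListener adds a handler without clobbering the inline
      // onclick that ASP.NET renders. (Older IE needs attachEvent instead,
      // which is part of what JQuery papers over.)
      el.addEventListener('click', setBlankTarget, false);
    }
  }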

Sometimes a simple solution works like a charm; it also reduces re-testing and shortens development time. I hope you will find this piece of information useful.

The Code

If you want to test this yourself and see how it works, here is my simple back-end event handler for the output. In real life, the handler could invoke another process that generates a PDF or some other non-HTML type of document, and then call Response.Redirect() or Response.TransmitFile() to return the document to the client.

  protected void RptToNewTarget_Click(object sender, EventArgs e) {
    Response.Write(
      string.Format("Your report <b>{0}</b> is generated.", this.txtReportName.Text));
    Response.End();
  }

Here is the markup for the JQuery approach. You can easily alter it to the JavaScript solution discussed above.

<form id="form1" runat="server" defaultfocus="txtReportName">
<div>
  Enter Report Name: <br />
  <asp:TextBox ID="txtReportName" runat="server"></asp:TextBox>
  <asp:RequiredFieldValidator ID="RequiredFieldValidatorTxtReportName" runat="server" 
       ErrorMessage="Please enter the report name." Display="Dynamic" ControlToValidate="txtReportName"></asp:RequiredFieldValidator><br />
  <asp:LinkButton ID="lnkRptToNewTarget" runat="server" OnClick="RptToNewTarget_Click">Generate Report to New Target</asp:LinkButton><br />
  <asp:Button ID="btnRptToNewTarget" runat="server" OnClick="RptToNewTarget_Click" Text="Generate Report to New Target"  />
</div>
</form>

<script type="text/javascript">
  $('#lnkRptToNewTarget,#btnRptToNewTarget').click(function() {
    $('form').attr('target', '_blank');
  });
</script>

Sunday, January 23, 2011

RealPlayer Free Version Offline Installers for Version 14.0.1.609

Do you have a hard time locating a RealPlayer Free version installer for offline use? Let me share some standalone installer URLs that I discovered for specific regions and languages. How to discover the offline installer download URLs is explained at the end of this post.

English - United States
English - United Kingdom
Traditional Chinese - Taiwan
Simplified Chinese - Mainland China
Japanese - Japan
Deutsch - Germany
French - France

RealPlayer Version:   14.0.1.609
Installer File Version:  12.0.1.609

If you don't care about having the latest version of RealPlayer and don't mind using an older one, you can simply go to the RealPlayer old version page for download. That page offers older standalone installers for offline use, including all supported languages.


URL Patterns

Looking at the URLs, I found that only the first two query string parameters are the keys to getting the offline installers.

Every URL follows exactly the same pattern (except for English US). The main difference between them is the distcode, which consists of the version (i.e., R61), the language or region code (i.e., UK, TW, CN, JA, DE, and FR), plus the character D. Anyway, you should get the idea and can figure it out for other regions if you're interested.

How to Find out Where to Download Offline Installer

  • Go to the Real Website in your region
  • Download the Web version. The free version won't ask for an email address; if it does, you're downloading the wrong version. Some regions may call it a basic version instead of free.
  • Disconnect from the Internet and then run the Web installer.
  • Wait until it errors out. It will take a while, possibly about 3 minutes.
  • As soon as the Web installer gives up, it offers a re-download RealPlayer option. Click that button and the Web installer will open the download URL in your browser.
  • Grab the URL, reconnect to the Internet, and download.

Adobe Reader Offline Installers

The easy way to download an Adobe Reader standalone installer for offline use is to get it from the Adobe FTP site. Choose your OS, your desired Adobe Reader version, and your language, and then download.

Note that browsing the Adobe FTP site is sometimes a bit slow.

As of today, the current version is still Adobe Reader 10 (also known as Adobe Reader X). The following are some links for Adobe Reader v10.0.0 in some of the languages used on Windows.

Major Adobe Reader Supported Platforms

Windows offline Adobe Reader Installer
Apple Macintosh offline Adobe Reader Installer
Unix/Linux offline Adobe Reader Installer
Android mobile device offline Adobe Installer

For Adobe Reader news update, you can visit their blog.

Monday, January 10, 2011

Registry Mechanic vs AML Free Registry Cleaner

Despite all the good reviews about PC Tools Registry Mechanic, I am disappointed by it.

I don't normally use a registry cleaner, but lately I wanted to try one out and possibly find one to add to my collection. In a Google search, Registry Mechanic by PC Tools is highly recommended by most of the reviews I have read. Unfortunately, I would not recommend it.

I did a very simple test myself a week ago on one of my virtual machines. This machine is constantly abused for BitTorrent, software tryouts, movie watching from far-flung sites, and so on, so I know its registry could be a mess. Because it is a virtual machine, it is easy to recover if something goes wrong during the registry test. In the course of searching for a good registry cleaner, AML Free Registry Cleaner was another good candidate on my mind, so this test is Registry Mechanic vs. AML. If you seek professional opinions on registry cleaners, you may have to go somewhere else; this is simply my opinion.

Registry Mechanic vs AML Free Registry Cleaner:

Test Targets   Registry Mechanic                              AML Free Registry Cleaner
Version        10.0.134 (currently also known as              4.21
               Registry Cleaner 2011)
Price          Free to try (the trial repairs only the        Free
               first 6 sections); $29.95 to buy

Test machine: Windows XP SP3.

Test Results based on problems found and detected:
Registry Mechanic 2011: 150 problems found
AML Free Registry Cleaner: 327 problems found

I went ahead and fixed all the problems found by AML, then ran the test twice (before and after a reboot). Both results were the same, as shown below:

Registry Mechanic 2011: 17 problems found
AML Free Registry Cleaner: 0 problems found

I didn't find any abnormality after the fix by AML; everything is working fine.

I could not use Registry Mechanic to fix all the problems it found because of its trial-version restriction. However, I did do something else interesting that may be of interest to you.

The PC Tools site advertises that Registry Mechanic 2011 supports 17 languages, but there is no further information about which languages it actually supports (or at least I could not find this information, nor separate installers for other languages). When I installed it, it did not present a language selection, so I was curious how good its language detection was. Since the XP installed in this virtual machine is a multilingual version, I started fresh and switched the OS to Traditional Chinese to test its language detection.

Why Traditional Chinese? A lot of software does not handle Traditional Chinese well; it usually mistakes the OS for Simplified Chinese when it tries to be smart without letting the user make a choice. Registry Mechanic suffers from the same problem.

Like AML, the installation of Registry Mechanic is simple and straightforward. AML supports English only, while Registry Mechanic claims to support multiple languages. Unfortunately, it won't present you with a language selection; it tries to be smart and match the language of your OS, which is sometimes not what you want. Worse, it can present the wrong UI language: Registry Mechanic mistakenly presented everything in Simplified Chinese while my OS language was Traditional Chinese, and because of this, some characters on screen were garbage.

As I mentioned before, I could not find details of which 17 languages Registry Mechanic supports. Based on my test, it does not support Traditional Chinese, only Simplified. To me, it is a serious problem for software that claims to support multiple languages, yet provides no language selection, to present the wrong UI language. Indeed, many users would prefer to choose their language themselves.

There is one thing I found most annoying about AML Free Registry Cleaner: when it starts, it always opens maximized. I personally don't like that, and there is no way to configure AML to remember my preference. Other than this, I am fond of AML.

AML Free Registry Cleaner also comes with a free Disk Cleaner option. At first, I was surprised that it found a lot of tmp files while CCleaner reported nothing about them. Later, I found that those tmp files are recreated every time I reboot, so I won't be able to get rid of them: they are used or created by the system or by installed software (such as anti-virus) and are needed whenever the system restarts. So in this case the AML Disk Cleaner is of no use.

Conclusion

Between Registry Mechanic and AML, I would highly recommend AML Free Registry Cleaner. Most importantly, AML detects more problems than Registry Mechanic, and it is free!