March 2023

The task at hand is to add a control toolbar to the Web editor, so that controls can be added to the editor either by double-clicking or by dragging and dropping, similar to the designer in Visual Studio. Double-click is easy to implement; the tricky part is drag and drop. Fortunately, I found relevant documentation online. Following 《Internet Explorer 编程》 (Programming Internet Explorer) and implementing an IDropTarget, there are two possible approaches:

Option 1: In the IDropTarget methods DragEnter and DragOver, handle the custom drag specially by setting pdwEffect directly to DROPEFFECT_COPY, and then process the data in the Drop method.

Option 2: Add the CF_TEXT format plus a custom format to the IDataSource. Because the editor already allows CF_TEXT data to be dropped, the DragEnter and DragOver implementations can stay unchanged, and only Drop needs to handle the data. Since not every region of the editor accepts a drop, this also has the extra benefit that the editor's default implementation already determines whether the current position can accept a drop.

Option 2 is clearly the better choice.
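For a rough sense of what option 2 looks like on the drag-source side, here is a minimal C# (WinForms) sketch; it only illustrates putting CF_TEXT plus a custom format on the drag data object, it is not the editor's actual code, and the format name "EditorControlDescriptor" is made up:

using System.Windows.Forms;

public static class ControlToolbarDrag
{
    // Hypothetical custom clipboard format name, used only for illustration.
    public const string CustomFormat = "EditorControlDescriptor";

    public static void BeginDrag(Control source, string controlId)
    {
        DataObject data = new DataObject();
        // CF_TEXT: the editor's default handling accepts this, so DragEnter/DragOver need no changes
        data.SetData(DataFormats.Text, controlId);
        // Custom format: our Drop handler looks for this to create the real control
        data.SetData(CustomFormat, controlId);
        source.DoDragDrop(data, DragDropEffects.Copy);
    }
}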

Original article: Fast, Scalable, Streaming AJAX Proxy - continuously deliver data from across domains
Author: Omar Al Zabir
URL: http://www.codeproject.com/KB/ajax/ajaxproxy.aspx


Introduction

Due to browsers' prohibition on cross-domain XMLHTTP calls, all AJAX websites must have a server-side proxy to fetch content from external domains like Flickr or Digg. From the client-side JavaScript code, an XMLHTTP call goes to the server-side proxy hosted on the same domain, and the proxy then downloads the content from the external server and sends it back to the browser. In general, all AJAX websites on the Internet that show content from external domains follow this proxy approach, except for some rare ones that use JSONP. Such a proxy gets a very large number of hits when a lot of components on the website are downloading content from external domains. So, it becomes a scalability issue when the proxy starts getting millions of hits. Moreover, a web page's overall load performance largely depends on the performance of the proxy, as it delivers content to the page. In this article, we will take a look at how we can take a conventional AJAX proxy and make it faster, asynchronous, and able to continuously stream content, and thus make it more scalable.

AJAX Proxy in Action

You can see such a proxy in action when you go to Pageflakes.com. You will see flakes (widgets) loading many different kinds of content, such as weather feeds, Flickr photos, YouTube videos, and RSS, from many different external domains. All of this is done via a Content Proxy. The Content Proxy served about 42.3 million URLs last month, which is quite an engineering challenge for us to make it both fast and scalable. Sometimes the Content Proxy serves megabytes of data, which poses an even greater engineering challenge. As such, the proxy gets a large number of hits; if we can save an average of 100ms from each call, we can save 4.23 million seconds of download/upload/processing time every month. That's about 1175 man hours wasted throughout the world by millions of people staring at a browser waiting for content to download.

Such a content proxy takes an external server's URL as a query parameter. It downloads the content from the URL, and then writes the content as the response back to the browser.


Figure: Content proxy working as a middleman between the browser and the external domain

The above timeline shows how a request goes to the server and then the server makes a request to the external server, downloads the response, and then transmits it back to the browser. The response arrow from the proxy to the browser is larger than the response arrow from the external server to the proxy because generally, a proxy server's hosting environment has a better download speed than the user's Internet connectivity.

A Basic Proxy

Such a content proxy is also available in my open source AJAX Web Portal, Dropthings.com. You can see from its code on CodePlex how such a proxy is implemented.

The following is a very simple, synchronous, non-streaming, blocking proxy:


[WebMethod]
[ScriptMethod(UseHttpGet = true)]
public string GetString(string url)
{
    using (WebClient client = new WebClient())
    {
        string response = client.DownloadString(url);
        return response;
    }
}

Although it shows the general principle, it's nowhere close to a real proxy, because:

  • It's a synchronous proxy and thus not scalable. Every call to this web method causes the ASP.NET thread to wait until the call to the external URL completes.
  • It's non-streaming. It first downloads the entire content on the server, stores it in a string, and then uploads that entire content to the browser. If you pass an MSDN feed URL, it will download that gigantic 220 KB RSS XML on the server and store it in a 220 KB string (actually double the size, as .NET strings are Unicode), then write the 220 KB to an ASP.NET Response buffer, consuming another 220 KB UTF-8 byte array in memory. Then, that 220 KB will be passed to IIS in chunks so that it can transmit it to the browser.
  • It does not produce proper response headers to cache the response on the browser, nor does it deliver important headers like Content-Type from the source.
  • If an external URL is providing gzipped content, it decompresses the content into a string representation and thus wastes server memory.
  • It does not cache the content on the server. So, repeated calls to the same external URL within the same second or minute will download content from the external URL and thus waste bandwidth on your server.

We need an asynchronous streaming proxy that transmits the content to the browser while it downloads from the external domain server. So, it will download bytes from the external URL in small chunks and immediately transmit them to the browser. As a result, the browser will see a continuous transmission of bytes right after calling the web service. There will be no delay waiting for the content to be fully downloaded on the server.

A Better Proxy

Before I show you the complex streaming proxy code, let's take an evolutionary approach. Let's build a better Content Proxy than the one shown above; it is still synchronous and non-streaming, but it does not have the other problems mentioned above. We will build an HTTP handler named RegularProxy.ashx which takes a URL as a query parameter. It also takes a cache duration as a query parameter, which it uses to produce proper response headers in order to cache the content on the browser. Thus, it saves the browser from downloading the same content again and again.
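Just to make the handler's query-string contract concrete, here is a small, hypothetical helper (my own illustration, not part of the article's code; the name BuildProxyUrl is made up) that builds the kind of URL a widget would request:

using System.Web;

public static class ProxyUrlBuilder
{
    // Produces something like:
    //   /RegularProxy.ashx?url=http%3a%2f%2fexample.com%2ffeed.xml&cache=10&type=text%2fxml
    public static string BuildProxyUrl(string externalUrl, int cacheMinutes, string contentType)
    {
        return "/RegularProxy.ashx?url=" + HttpUtility.UrlEncode(externalUrl) +
               "&cache=" + cacheMinutes +
               "&type=" + HttpUtility.UrlEncode(contentType);
    }
}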


using System;
using System.Web;
using System.Web.Caching;
using System.Net;
using ProxyHelpers;

public class RegularProxy : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string url = context.Request["url"];
        int cacheDuration = Convert.ToInt32(context.Request["cache"] ?? "0");
        string contentType = context.Request["type"];

        // We don't want to buffer because we want to save memory
        context.Response.Buffer = false;

        // Serve from cache if available
        if (context.Cache[url] != null)
        {
            context.Response.BinaryWrite(context.Cache[url] as byte[]);
            context.Response.Flush();
            return;
        }

        using (WebClient client = new WebClient())
        {
            if (!string.IsNullOrEmpty(contentType))
                client.Headers["Content-Type"] = contentType;

            client.Headers["Accept-Encoding"] = "gzip";
            client.Headers["Accept"] = "*/*";
            client.Headers["Accept-Language"] = "en-US";
            client.Headers["User-Agent"] =
                "Mozilla/5.0 (Windows; U; Windows NT 6.0; " +
                "en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6";

            byte[] data = client.DownloadData(url);

            context.Cache.Insert(url, data, null,
                Cache.NoAbsoluteExpiration,
                TimeSpan.FromMinutes(cacheDuration),
                CacheItemPriority.Normal, null);

            if (!context.Response.IsClientConnected) return;

            // Deliver content type, encoding and length
            // as they are received from the external URL
            context.Response.ContentType =
                client.ResponseHeaders["Content-Type"];
            string contentEncoding =
                client.ResponseHeaders["Content-Encoding"];
            string contentLength =
                client.ResponseHeaders["Content-Length"];

            if (!string.IsNullOrEmpty(contentEncoding))
                context.Response.AppendHeader("Content-Encoding", contentEncoding);
            if (!string.IsNullOrEmpty(contentLength))
                context.Response.AppendHeader("Content-Length", contentLength);

            if (cacheDuration > 0)
                HttpHelper.CacheResponse(context, cacheDuration);

            // Transmit the exact bytes downloaded
            context.Response.BinaryWrite(data);
        }
    }

    public bool IsReusable
    {
        get { return false; }
    }
}

There are a few enhancements in this proxy:

  • It allows server side caching of content. The same URL requested by a different browser within a time period will not be downloaded on the server again, instead it will be served from a cache.
  • It generates a proper response cache header so that the content can be cached on the browser.
  • It does not decompress the downloaded content in memory. It keeps the original byte stream intact. This saves memory allocation.
  • It transmits the data in a non-buffered fashion, which means the ASP.NET Response object does not buffer the response and thus saves memory.

However, this is a blocking proxy.

Even Better Proxy - Stream!

We need to make a streaming asynchronous proxy for better performance. Here's why:


Figure: Continuous streaming proxy

As you see, when data is transmitted from the server to the browser while the server downloads the content, the delay for the server-side download is eliminated. So, if the server takes 300ms to download something from an external source, and then 700ms to send it back to the browser, you can save up to 300ms Network Latency between the server and the browser. The situation gets even better when the external server that serves the content is slow and takes quite some time to deliver the content. The slower the external site is, the more saving you get in this continuous streaming approach. This is significantly faster than the blocking approach when the external server is in Asia or Australia and your server is in the USA.

The approach for a continuous proxy is:

  • Read bytes from the external server in chunks of 8KB from a separate thread (reader thread) so that it's not blocked.
  • Store the chunks in an in-memory Queue called Pipe Stream.
  • Write the chunks to ASP.NET Response from that same queue.
  • If the queue is empty, wait until more bytes are downloaded by the reader thread.


The Pipe Stream needs to be thread-safe, and it needs to support blocking reads. By blocking read, I mean that if a thread tries to read a chunk from it and the stream is empty, the read suspends that thread until another thread writes something to the stream. Once a write happens, it resumes the reader thread and allows it to read. I have taken the code of PipeStream from the CodeProject article by James Kolpack and extended it to make sure it has high performance, supports chunks of bytes being stored instead of single bytes, supports timeouts on waits, and so on.
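To give a rough idea of the blocking behaviour described above, here is a minimal, simplified sketch of such a pipe (my own illustration; it is not the article's PipeStreamBlock and omits the timeout and performance work just mentioned):

using System;
using System.Collections.Generic;
using System.Threading;

// Minimal blocking byte pipe: Write enqueues a chunk, Read blocks while the
// pipe is empty and not yet flushed. Assumes readers use a buffer at least
// as large as any written chunk (both sides use the same 8 KB buffer here).
public class SimpleBlockingPipe
{
    private readonly Queue<byte[]> _chunks = new Queue<byte[]>();
    private bool _finished;

    public void Write(byte[] buffer, int offset, int count)
    {
        byte[] chunk = new byte[count];
        Buffer.BlockCopy(buffer, offset, chunk, 0, count);
        lock (_chunks)
        {
            _chunks.Enqueue(chunk);
            Monitor.Pulse(_chunks);      // wake up a waiting reader
        }
    }

    // Marks the end of the stream so blocked readers can finish.
    public void Flush()
    {
        lock (_chunks)
        {
            _finished = true;
            Monitor.PulseAll(_chunks);
        }
    }

    // Returns the number of bytes copied into buffer, or 0 when the pipe is done.
    public int Read(byte[] buffer, int offset, int count)
    {
        lock (_chunks)
        {
            while (_chunks.Count == 0 && !_finished)
                Monitor.Wait(_chunks);   // block until a writer adds data

            if (_chunks.Count == 0) return 0;   // finished and fully drained

            byte[] chunk = _chunks.Dequeue();
            int bytes = Math.Min(count, chunk.Length);
            Buffer.BlockCopy(chunk, 0, buffer, offset, bytes);
            return bytes;
        }
    }
}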

I did some comparison between a regular proxy (blocking, synchronous, download all then deliver) and a streaming proxy (continuous transmission from the external server to the browser). Both proxies download the MSDN feed and deliver it to the browser. The time taken here shows the total duration of the browser making the request to the proxy and then getting the entire response:


Figure: Time taken by a streaming proxy vs. a regular proxy while downloading the MSDN feed

Not a very scientific graph, and the response time varies with the link speed between the browser and the proxy server and then from the proxy server to the external server. But it shows that most of the time, the streaming proxy outperformed the regular proxy.


Figure: Test client to compare between a regular proxy and a streaming proxy

You can also test both proxies' response times by going to this link. Put in your URL and hit the Regular/Stream button, and see the "Statistics" text box for the total duration. You can turn on "Cache response" and hit a URL from one browser. Then, go to another browser and hit the URL to see the response coming from the server cache directly. Also, if you hit the URL again in the same browser, you will see that the response comes instantly, without ever making a call to the server. That's the browser cache at work.

Learn more about HTTP response caching from my blog post: Making the best use of cache for high performance websites.

A Visual Studio Web Test run inside a Load Test shows a better picture:

Figure: Regular proxy load test result shows Average Requests/Sec is 0.79 and Average Response Time is 2.5 sec.

Figure: Streaming proxy load test result shows Average Requests/Sec is 1.08 and Average Response Time is 1.8 sec.

From the above load test results, the streaming proxy has 26% better Requests/Sec, and its Average Response Time is 29% better. The numbers may sound small, but at Pageflakes, a 29% better response time means 1.29 million seconds saved per month for all the users on the website. So, we are effectively saving 353 man hours per month that were being wasted staring at the browser screen while it downloads content.

Building the streaming proxy

It was not straightforward to build a streaming proxy that can outperform a regular proxy. I tried three ways to finally find the optimal combination that can outperform a regular proxy.

The streaming proxy uses HttpWebRequest and HttpWebResponse to download data from an external server. They are used to gain more control over how data is read, specifically the ability to read in chunks of bytes, which WebClient does not offer. Moreover, there is some optimization in building a fast, scalable HttpWebRequest that this proxy requires.


public class SteamingProxy : IHttpHandler
{
    const int BUFFER_SIZE = 8 * 1024;

    private Utility.PipeStream _PipeStream;
    private Stream _ResponseStream;

    public void ProcessRequest(HttpContext context)
    {
        string url = context.Request["url"];
        int cacheDuration = Convert.ToInt32(context.Request["cache"] ?? "0");
        string contentType = context.Request["type"];

        if (cacheDuration > 0)
        {
            if (context.Cache[url] != null)
            {
                CachedContent content = context.Cache[url] as CachedContent;
                if (!string.IsNullOrEmpty(content.ContentEncoding))
                    context.Response.AppendHeader("Content-Encoding", content.ContentEncoding);
                if (!string.IsNullOrEmpty(content.ContentLength))
                    context.Response.AppendHeader("Content-Length", content.ContentLength);

                context.Response.ContentType = content.ContentType;
                content.Content.Position = 0;
                content.Content.WriteTo(context.Response.OutputStream);
                return; // served from cache, no need to hit the external server again
            }
        }

        HttpWebRequest request = HttpHelper.CreateScalableHttpWebRequest(url);
        // As we will stream the response, we don't want to automatically
        // decompress the content when the source sends compressed content
        request.AutomaticDecompression = DecompressionMethods.None;

        if (!string.IsNullOrEmpty(contentType))
            request.ContentType = contentType;

        using (new TimedLog("StreamingProxy\tTotal GetResponse and transmit data"))
        using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
        {
            this.DownloadData(request, response, context, cacheDuration);
        }
    }

The DownloadData method downloads data from the response stream (connected to the external server) and then delivers it to the ASP.NET Response stream.


private void DownloadData(HttpWebRequest request, HttpWebResponse response,
    HttpContext context, int cacheDuration)
{
    MemoryStream responseBuffer = new MemoryStream();
    context.Response.Buffer = false;
    try
    {
        if (response.StatusCode != HttpStatusCode.OK)
        {
            context.Response.StatusCode = (int)response.StatusCode;
            return;
        }

        using (Stream readStream = response.GetResponseStream())
        {
            if (context.Response.IsClientConnected)
            {
                string contentLength = string.Empty;
                string contentEncoding = string.Empty;
                ProduceResponseHeader(response, context, cacheDuration,
                    out contentLength, out contentEncoding);

                //int totalBytesWritten =
                //    TransmitDataInChunks(context, readStream, responseBuffer);
                //int totalBytesWritten =
                //    TransmitDataAsync(context, readStream, responseBuffer);
                int totalBytesWritten = TransmitDataAsyncOptimized(context,
                    readStream, responseBuffer);

                if (cacheDuration > 0)
                {
                    #region Cache Response in memory
                    // Cache the content on the server for a specific duration
                    CachedContent cache = new CachedContent();
                    cache.Content = responseBuffer;
                    cache.ContentEncoding = contentEncoding;
                    cache.ContentLength = contentLength;
                    cache.ContentType = response.ContentType;

                    context.Cache.Insert(request.RequestUri.ToString(),
                        cache, null, Cache.NoAbsoluteExpiration,
                        TimeSpan.FromMinutes(cacheDuration),
                        CacheItemPriority.Normal, null);
                    #endregion
                }
            }
            context.Response.Flush();
        }
    }
    catch (Exception x)
    {
        Log.WriteLine(x.ToString());
        request.Abort();
    }
}

Here, I have tried three different approaches. The one that's uncommented now, called TransmitDataAsyncOptimized, is the best approach. I will explain all three approaches soon. The purpose of the DownloadData function is to prepare the ASP.NET Response stream before sending data. Then, it sends the data using one of the three approaches and caches the downloaded bytes in a memory stream.

The first approach was to read 8192 bytes from the response stream that's connected to the external server and then immediately write them to the response (TransmitDataInChunks).


private int TransmitDataInChunks(HttpContext context, Stream readStream,
    MemoryStream responseBuffer)
{
    byte[] buffer = new byte[BUFFER_SIZE];
    int bytesRead;
    int totalBytesWritten = 0;

    while ((bytesRead = readStream.Read(buffer, 0, BUFFER_SIZE)) > 0)
    {
        context.Response.OutputStream.Write(buffer, 0, bytesRead);
        responseBuffer.Write(buffer, 0, bytesRead);
        totalBytesWritten += bytesRead;
    }
    return totalBytesWritten;
}

Here, readStream is the response stream received from the HttpWebResponse.GetResponseStream call; it's downloading from the external server. responseBuffer is just a memory stream to hold the entire response in memory so that we can cache it.

This approach was even slower than a regular proxy. After doing some code-level performance profiling, it looked like writing to OutputStream takes quite some time as IIS tries to send the bytes to the browser. So, for every chunk there was the delay of network latency plus the time taken to transmit the chunk. The cumulative network latency from frequent calls to OutputStream.Write added significant delay to the total operation.

The second approach was to try multithreading. A new thread launched from the ASP.NET thread continuously reads from the socket without ever waiting for the Response.OutputStream that sends the bytes to the browser. The main ASP.NET thread waits until bytes are collected, and then transmits them to the response immediately.


private int TransmitDataAsync(HttpContext context, Stream readStream,
    MemoryStream responseBuffer)
{
    this._ResponseStream = readStream;
    _PipeStream = new Utility.PipeStreamBlock(5000);

    byte[] buffer = new byte[BUFFER_SIZE];

    Thread readerThread = new Thread(new ThreadStart(this.ReadData));
    readerThread.Start();

    int totalBytesWritten = 0;
    int dataReceived;
    while ((dataReceived = this._PipeStream.Read(buffer, 0, BUFFER_SIZE)) > 0)
    {
        context.Response.OutputStream.Write(buffer, 0, dataReceived);
        responseBuffer.Write(buffer, 0, dataReceived);
        totalBytesWritten += dataReceived;
    }

    _PipeStream.Dispose();
    return totalBytesWritten;
}

Here, the read is performed on the PipeStream instead of the socket from the ASP.NET thread. There's a new thread spawned which writes data to the PipeStream as it downloads bytes from the external site. As a result, we have the ASP.NET thread writing data to OutputStream continuously, and there's another thread that's downloading data from the external server uninterrupted. The following code downloads data from the external server and then stores it in the PipeStream.


private void ReadData()
{
    byte[] buffer = new byte[BUFFER_SIZE];
    int dataReceived;
    int totalBytesFromSocket = 0;
    try
    {
        while ((dataReceived = this._ResponseStream.Read(buffer, 0, BUFFER_SIZE)) > 0)
        {
            this._PipeStream.Write(buffer, 0, dataReceived);
            totalBytesFromSocket += dataReceived;
        }
    }
    catch (Exception x)
    {
        Log.WriteLine(x.ToString());
    }
    finally
    {
        this._ResponseStream.Dispose();
        this._PipeStream.Flush();
    }
}

The problem with this approach is that there are still too many Response.OutputStream.Write calls happening. The external server delivers content in a variable number of bytes, sometimes 3592 bytes, sometimes 8192 bytes, and sometimes only 501 bytes. It all depends on how fast the connectivity from your server to the external server is. Generally, Microsoft servers are only a doorstep away, so you almost always get 8192 bytes (the buffer max size) when you call _ResponseStream.Read while reading from the MSDN feed. But when you are talking to a less reliable server, say in Australia, you will not get 8192 bytes per read call all the time. So, you will end up making more Response.OutputStream.Write calls than you should. A better, and the final, approach is to introduce another buffer which holds the bytes being written to the ASP.NET Response and flushes itself to Response.OutputStream as soon as 8192 bytes are ready to be delivered. This intermediate buffer ensures that 8192 bytes at a time are delivered to Response.OutputStream.


private int TransmitDataAsyncOptimized(HttpContext context, Stream readStream,
    MemoryStream responseBuffer)
{
    this._ResponseStream = readStream;
    _PipeStream = new Utility.PipeStreamBlock(10000);

    byte[] buffer = new byte[BUFFER_SIZE];

    // Asynchronously read content from the response stream
    Thread readerThread = new Thread(new ThreadStart(this.ReadData));
    readerThread.Start();

    int totalBytesWritten = 0;
    int dataReceived;

    byte[] outputBuffer = new byte[BUFFER_SIZE];
    int responseBufferPos = 0;
    while ((dataReceived = this._PipeStream.Read(buffer, 0, BUFFER_SIZE)) > 0)
    {
        // If about to overflow, transmit the output buffer and restart
        int bufferSpaceLeft = BUFFER_SIZE - responseBufferPos;

        if (bufferSpaceLeft < dataReceived)
        {
            Buffer.BlockCopy(buffer, 0, outputBuffer, responseBufferPos, bufferSpaceLeft);

            context.Response.OutputStream.Write(outputBuffer, 0, BUFFER_SIZE);
            responseBuffer.Write(outputBuffer, 0, BUFFER_SIZE);
            totalBytesWritten += BUFFER_SIZE;

            // Initialize the output buffer and copy the bytes that were not sent
            responseBufferPos = 0;
            int bytesLeftOver = dataReceived - bufferSpaceLeft;
            Buffer.BlockCopy(buffer, bufferSpaceLeft, outputBuffer, 0, bytesLeftOver);
            responseBufferPos = bytesLeftOver;
        }
        else
        {
            Buffer.BlockCopy(buffer, 0, outputBuffer, responseBufferPos, dataReceived);
            responseBufferPos += dataReceived;
        }
    }

    // If some data is left in the output buffer, send it
    if (responseBufferPos > 0)
    {
        context.Response.OutputStream.Write(outputBuffer, 0, responseBufferPos);
        responseBuffer.Write(outputBuffer, 0, responseBufferPos);
        totalBytesWritten += responseBufferPos;
    }

    _PipeStream.Dispose();
    return totalBytesWritten;
}

The above method ensures that 8192 bytes are written at a time to the ASP.NET Response stream (plus one final, smaller write for whatever is left over). This way, the total number of times the response is written is roughly (total bytes read / 8192); for the 220 KB MSDN feed mentioned earlier, that is about 28 writes.

Streaming proxy with asynchronous HTTP handler

Now that we are streaming the bytes, we need to make this proxy asynchronous so that it does not hold the main ASP.NET thread for too long. Being asynchronous means it will release the ASP.NET thread as soon as it makes a call to the external server. When the external server call completes and bytes are available for download, it will grab a thread from the ASP.NET thread pool and complete the execution.

When the proxy is not asynchronous, it keeps the ASP.NET thread busy until the entire connect and download operation completes. If the external server is slow to respond, the proxy unnecessarily holds the ASP.NET thread for too long. As a result, if the proxy is getting too many requests for the slow server, ASP.NET threads will soon be exhausted and your server will stop responding to any new request. Users hitting any part of your website on that server will get no response from it. We had such a problem at Pageflakes. We were requesting data from a Stock Quote web service. The web service was taking more than 60 seconds to respond to the call. As we did not have asynchronous handlers back then, our Content Proxy was taking up all the ASP.NET threads from the thread pool and our site was not responding. We were restarting IIS every 10 minutes to get around this problem for a couple of days until the Stock Quote web service fixed itself.

Making an asynchronous HTTP handler is not so easy to understand. This MSDN article tries to explain it, but it's hard to grasp the concept fully from that article alone. So, I have written an entire chapter in my book, "Building a Web 2.0 portal using ASP.NET 3.5", that explains how an asynchronous handler is built. From my experience, most people are confused about when to use one, so I have shown three specific scenarios where async handlers are useful. I have also explained several other scalability issues with such a content proxy that you will find interesting to read, especially several ingenious hacking attempts to bring a website down by exploiting the Content Proxy, and how to defend against them.

The first step is to implement the IHttpAsyncHandler interface and break the ProcessRequest function's code into two parts: BeginProcessRequest and EndProcessRequest. The Begin method will make an asynchronous call to HttpWebRequest.BeginGetResponse and return the thread back to the ASP.NET thread pool.

public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb,
    object extraData)
{
    string url = context.Request["url"];
    int cacheDuration = Convert.ToInt32(context.Request["cache"] ?? "0");
    string contentType = context.Request["type"];

    if (cacheDuration > 0)
    {
        if (context.Cache[url] != null)
        {
            // We have the response to this URL already cached
            SyncResult result = new SyncResult();
            result.Context = context;
            result.Content = context.Cache[url] as CachedContent;
            return result;
        }
    }

    HttpWebRequest request = HttpHelper.CreateScalableHttpWebRequest(url);
    request.AutomaticDecompression = DecompressionMethods.None;

    if (!string.IsNullOrEmpty(contentType))
        request.ContentType = contentType;

    AsyncState state = new AsyncState();
    state.Context = context;
    state.Url = url;
    state.CacheDuration = cacheDuration;
    state.Request = request;

    return request.BeginGetResponse(cb, state);
}

When the BeginGetResponse call completes and the external server has started sending us the response bytes, ASP.NET calls the EndProcessRequest method. This method downloads the bytes from the external server and then delivers them back to the browser.


public void EndProcessRequest(IAsyncResult result)
{
    if (result.CompletedSynchronously)
    {
        // Content is already available in the cache
        // and can be delivered from the cache
        SyncResult syncResult = result as SyncResult;
        syncResult.Context.Response.ContentType = syncResult.Content.ContentType;
        syncResult.Context.Response.AppendHeader("Content-Encoding",
            syncResult.Content.ContentEncoding);
        syncResult.Context.Response.AppendHeader("Content-Length",
            syncResult.Content.ContentLength);
        syncResult.Content.Content.Seek(0, SeekOrigin.Begin);
        syncResult.Content.Content.WriteTo(syncResult.Context.Response.OutputStream);
    }
    else
    {
        // Content is not available in the cache and needs to be
        // downloaded from the external source
        AsyncState state = result.AsyncState as AsyncState;
        state.Context.Response.Buffer = false;
        HttpWebRequest request = state.Request;

        using (HttpWebResponse response =
            request.EndGetResponse(result) as HttpWebResponse)
        {
            this.DownloadData(request, response, state.Context, state.CacheDuration);
        }
    }
}
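The article shows only the two methods above; as a rough sketch (the class name, stubs, and wiring are my assumptions, not the article's actual class), they would sit inside a handler that implements IHttpAsyncHandler like this:

using System;
using System.Web;

// Outline only: how BeginProcessRequest/EndProcessRequest plug into IHttpAsyncHandler.
public class StreamingAsyncProxy : IHttpAsyncHandler
{
    public IAsyncResult BeginProcessRequest(HttpContext context, AsyncCallback cb, object extraData)
    {
        // ... the BeginProcessRequest body shown above goes here ...
        throw new NotImplementedException();
    }

    public void EndProcessRequest(IAsyncResult result)
    {
        // ... the EndProcessRequest body shown above goes here ...
    }

    // Required because IHttpAsyncHandler extends IHttpHandler;
    // it is not used when the asynchronous entry points are called.
    public void ProcessRequest(HttpContext context)
    {
        throw new InvalidOperationException("Use the asynchronous entry points.");
    }

    public bool IsReusable
    {
        get { return false; }
    }
}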

There you have it. A fast, scalable, continuous AJAX streaming proxy that can always outperform any regular AJAX proxy out there on the web.

In case you are wondering what the HttpHelper, AsyncState, and SyncResult classes are, they are some helper classes. Here's the code for these helpers:


public static class HttpHelper
{
    public static HttpWebRequest CreateScalableHttpWebRequest(string url)
    {
        HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
        request.Headers.Add("Accept-Encoding", "gzip");
        request.AutomaticDecompression = DecompressionMethods.GZip;
        request.MaximumAutomaticRedirections = 2;
        request.ReadWriteTimeout = 5000;
        request.Timeout = 3000;
        request.Accept = "*/*";
        request.Headers.Add("Accept-Language", "en-US");
        request.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US;" +
            "rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6";
        return request;
    }

    public static void CacheResponse(HttpContext context, int durationInMinutes)
    {
        TimeSpan duration = TimeSpan.FromMinutes(durationInMinutes);
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(DateTime.Now.Add(duration));
        context.Response.Cache.AppendCacheExtension("must-revalidate, proxy-revalidate");
        context.Response.Cache.SetMaxAge(duration);
    }

    public static void DoNotCacheResponse(HttpContext context)
    {
        context.Response.Cache.SetNoServerCaching();
        context.Response.Cache.SetNoStore();
        context.Response.Cache.SetMaxAge(TimeSpan.Zero);
        context.Response.Cache.AppendCacheExtension("must-revalidate, proxy-revalidate");
        context.Response.Cache.SetExpires(DateTime.Now.AddYears(-1));
    }
}

public class CachedContent
{
    public string ContentType;
    public string ContentEncoding;
    public string ContentLength;
    public MemoryStream Content;
}

public class AsyncState
{
    public HttpContext Context;
    public string Url;
    public int CacheDuration;
    public HttpWebRequest Request;
}

public class SyncResult : IAsyncResult
{
    public CachedContent Content;
    public HttpContext Context;

    #region IAsyncResult Members

    object IAsyncResult.AsyncState
    {
        get { return new object(); }
    }

    WaitHandle IAsyncResult.AsyncWaitHandle
    {
        get { return new ManualResetEvent(true); }
    }

    bool IAsyncResult.CompletedSynchronously
    {
        get { return true; }
    }

    bool IAsyncResult.IsCompleted
    {
        get { return true; }
    }

    #endregion
}

That's all folks.

Conclusion

Well, you have a faster and more scalable AJAX proxy than anyone else out there. So, feel really good about it :)

License

This article, along with any associated source code and files, is licensed under
The Code Project Open License (CPOL)

Author: matt@mattberseth.com
Source: http://mattberseth.com/blog/2008/05/aspnet_ajax_progress_bar_contr.html

ASP.NET AJAX Progress Bar Control

If you use AJAX in your web apps, you have no doubt made use of some sort of progress/status indicator that lets the user know that some operation is currently executing. In the app I am currently working on, we use an animated GIF for this. It works great, but sometimes you might find it nice to have more control over the indicator, i.e. interacting with it via JavaScript and styling it using CSS.


So I did a little research and found a nice example of one built using script.aculo.us. The demo page looked great, so I downloaded the source to get a feel for how it worked. I liked what I saw, so I thought I would create a new AjaxControlToolkit control based on this example. My original goal was just to port it over to ASP.NET, but as I started playing around with it, I thought I might make a few changes to it as well. So during the process of porting it, I made the following tweaks:

  • I added a mode that runs the progress bar from 0 to 100 continuously. This mode would be useful for scenarios where you don't know how long an operation will run (like a typical partial postback).
  • The original requires different images for progress indicators of different widths.  I chose to use a repeating background image instead so I could use a single progress image no matter the width of the control.
  • I add an updating CSS class to the control while the progress bar is running. In my demo page, I use this to darken the percentage while the indicator is running. I was also thinking about adding the current percentage to the class as well, so you could have a custom style applied depending upon what the current percentage is. Then you could do something like .progress .100 { } to control the styling when the indicator is displaying 100%.
  • I used a skinning approach that is very similar to the Toolkit's Tab control.  I went ahead and created a bunch of sample skins (shown above) just to make sure my skinning technique worked alright.

Below are some details on the control, including how to add one to your page, how to interact with it from JavaScript, and how to create custom skins using CSS. Read on if you are interested, and don't forget to check out the live demo and download. I built it using .NET 3.5 and Toolkit version 3.5.66666619.0, but I think it could be ported back to .NET 2.0 without too many issues.

Live Demo (IE6, IE7, FF and Opera) | Download

Using the Control

The download contains plenty of examples of how to interact with the control, but here is some sample markup that specifies the progress mode as well as the width ...

<!-- Continuous Mode / 150px wide -->
<mb:ProgressControl ID="ProgressControl1" runat="server" Mode="Continuous" Width="150px" />
<!-- Manual Mode / 70px wide -->
<mb:ProgressControl ID="ProgressControl12" runat="server" Mode="Manual" Width="70px" />

When the control is in Continuous mode, you can start and stop the progress animation by using the play() and stop() JavaScript functions:

//  start the indicator
$find('ProgressControl1').play();

//  stop it
$find('ProgressControl1').stop();

And when the control is in Manual mode, you can use set_percentage to manually change the percentage value. You can either provide an absolute value, as in the first example, or a value that is relative to whatever the current value is, as in the second example.

//  set the percentage to 62
$find('ProgressControl1').set_percentage(62);

//  increase the percentage by 15
$find('ProgressControl1').set_percentage('+15');

HTML Emitted by the Control

Below is the markup the control emits: one DIV for containing the progress image, one DIV for displaying the percentage text, two DIVs for applying a border, and an outer DIV that wraps it all.

<div class="ajax__progress" id="ProgressControl1">
    <!-- outer and inner elements for creating a border -->
    <div class="ajax__progress_outer" id="ProgressControl1_outer">
        <div class="ajax__progress_inner" id="ProgressControl1_inner">
            <!-- The background image for this element displays the indicator -->
            <div class="ajax__progress_indicator" id="ProgressControl1_indicator" />
        </div>
    </div>
    <!-- This element displays the percentage -->
    <div class="ajax__progress_info" id="ProgressControl1_info">75%</div>
</div>

Skinning the Control

To skin the control, you need to set the CssClass property of the ProgressControl to the name of the CSS class that defines your custom skin.  For the skin portion of the demo page I have defined 6 custom themes.  Below is the sample markup for this section ...

<mb:ProgressControl ID="ProgressControl4" runat="server" CssClass="green" Mode="Manual" Width="200px" />
<mb:ProgressControl ID="ProgressControl5" runat="server" CssClass="yelllow" Mode="Manual" Width="200px" />
<mb:ProgressControl ID="ProgressControl6" runat="server" CssClass="orange" Mode="Manual" Width="200px" />
<mb:ProgressControl ID="ProgressControl7" runat="server" CssClass="red" Mode="Manual" Width="200px" />
<mb:ProgressControl ID="ProgressControl8" runat="server" CssClass="lightblue" Mode="Manual" Width="200px" />
<mb:ProgressControl ID="ProgressControl11" runat="server" CssClass="solidblue" Mode="Manual" Width="200px" />

And here are the CSS style rules that apply the styles for these skins (shown as a screenshot in the original post).

One of the sample skins I made is roughly based on the XP style progress indicator.  To create this custom skin, I first created the background image that I want to use for the indicator (I am using a 6 x 9 image)


Then I use the .ajax__progress_indicator and .ajax__progress_inner classes to override the default skin's height and progress image (the CSS is also shown as a screenshot in the original post). Simple!

And here is how it looks ...


Screen shots of the Control's Features

Here are some static images that show off some of the control's features ...

Continuous Mode

Progress indicator continuously fills the region from left to right.

Fluid Width

Progress indicator continuously fills the region from left to right.

Manual Mode - Update Absolute Percentage

Use the JavaScript API to set the percentage to an absolute value.

Manual Mode - Update Relative Percentage

Use the JavaScript API to set the percentage to a relative value.

Skins

Use CSS to control the progress indicator's look and feel.

AJAX Operations

Example of displaying the indicator for AJAX operations.

Modal Popup

An example using the progress control with the Toolkit's ModalPopup control.

That's it.  Enjoy!

Source: http://blog.assembla.com/assemblablog/tabid/12618/bid/4996/default.aspx
Related article: 《争论:是否应该避免架构重写?》 (Debate: Should You Avoid Rewriting Your Architecture?)

3 Options for Rebuilding Your Software Without Risking Death

Posted by Andy Singleton on Mon, Apr 28, 2008 @ 12:18 AM

Rebuilding your software to update the architecture is the riskiest development project you will ever dive into. Big, successful companies have been crushed by this task. The right tactics are essential. Recently, a more moderately sized company called me in to advise them on a much-needed rebuild, and we broke the tactical options down into three categories - "standard", "incremental", and "buy", each with its own advantages and disadvantages.

A personal story illustrates the risks involved. In 2000, I launched a rebuild of my PowerSteering product. There were all sorts of reasons to rebuild the product, the least important but the most motivating being that we wanted to make the product extensible in some exciting new ways. Over the long term, this turned out to be the right thing to do. We added a lot of new capabilities that customers appreciated. However, this was a crushing mistake for me personally. We were coming to market with this product in 2001, just as the big recession hit, and we were carrying the extra expenses of the product rebuild. We got some VC's involved, and they set to work with a well-oiled plan to fire me, strip me of the assets that I had invested in the company, and dilute me out of a meaningful shareholding. It was hard on my pregnant wife.

In this case, we had a rare controlled experiment. Someone else took the same code and went into an alternate universe without a rewrite. An entrepreneur had approached me to purchase full ownership of some project management code, so that he could turn it into a product for his startup. I sold him the OLD code for a nominal sum, with a stipulation that he should send me $200K if he ever sold it. He changed the front end to be more industry specific, but he never made any upgrades to the underlying code. A few years later, my (former) company got a check in the mail. It turns out he sold the company and the product for $6M.

Dharmesh Shah elaborates upon this lesson in Why you should almost never rewrite your software.

Why take the time and expense to rebuild software? Because after a while, it becomes harder and harder to do the things that you want to do. There is an ever-increasing amount of code that is structured incorrectly for the new demands you are placing on it. Eventually, it looks easier and faster to rebuild the software with an updated architecture, than to continue working with the old code.

Here are the three major options:

1) Standard Approach - Prototype and expand

The standard approach is to build a prototype of the new product, and then expand it into a complete application.

Advantages:
This approach has the advantage of being relatively cost efficient. During the prototyping phase, you can work with a very small team, or even your best individual architect. And, you can make significant changes to build the product you actually want.

Disadvantages:
The start of the project might be delayed by work on specification, to ensure that the important details of the old product make their way into the new product. And, it's risky. Once you commit to expanding the prototype into a complete application, you enter a potentially long period in which you are spending extra money on development, and you do not have an up-to-date product. In the worst case, you have a situation like the one faced by Microsoft when they release a new operating system, where the new product needs to do everything the old product did, or it will break customer installations. You could hold your breath for a long time waiting for this to happen.

2) Incremental

In the incremental approach, you replace big components of your software with more modern components, or you refactor the existing code. However, you do this in a series of steps that leave you with a releasable and saleable product after each step. This is often compared to "rebuilding the plane in the air".

Advantages:
The big payoff is that you have a much lower risk of ending up without an updated product. A more subtle advantage is that you don't need to do as much specification, if you are willing to say that the new product should do basically what the old product did. That saves you time.

Disadvantages:
Compared with building software from scratch, this is more complicated to do, and it takes longer. You are working with a larger codebase, and you have to figure out how to keep the plane flying while you rip off the engines.

3) Buy

You might be able to acquire the rights to something that does most of what you need to do (see my story). In an open source world, you might be able to take something off the shelf and adapt it. In fact, modern product rebuild plans almost always contain a significant component of buying or borrowing from open source. As the amount of software in the world grows, it's increasingly important to do research to find out what you can acquire or adopt.

Advantages:
Much faster, and probably cheaper.

Disadvantages:
Might not meet all requirements. Might place some restrictions on the ultimate value of the asset.

In this case, we decided on the "Incremental" option. The team was able to come up with an amazing plan in which every aspect of the architecture was replaced, but an existing application would still run correctly at each step. If you find yourself contemplating a rebuild, the reduction in risk from the incremental approach is so great that it is worth applying your best brainpower to figuring out how to do it.

Original author unknown; source unknown.

When I first came across the word "refactoring", my intuition told me that this was a very important technique. Although programming is becoming more and more of an engineering discipline, a program itself still leaves room for artistic beauty. Cao Xueqin wrote Dream of the Red Chamber by "revising it over ten years, adding and deleting five times," and left behind a monumental masterpiece; writing an elegant, classic program likewise takes careful carving and polishing to raise its quality. That is what refactoring is.

The copy of Refactoring on my desk was written primarily by Martin Fowler, with four other expert-level figures in refactoring (Kent Beck, John Brant, William Opdyke, and Don Roberts) contributing the final chapters. It is a classic that stands alongside Design Patterns. Besides being an expert in object technology, Martin Fowler is also an expert in UML and patterns; his books Analysis Patterns, UML Distilled, Patterns of Enterprise Application Architecture, and Planning Extreme Programming have also been widely praised. The edition published by China Electric Power Press was translated by the well-known Hou Jie (侯捷) and Xiong Jie (熊节). Their translation is excellent, and they kept some widely understood English terms untranslated where a translation would have read awkwardly (this seems to be Hou Jie's habit), so a programmer of ordinary ability can read it without any obstacles.

Martin Fowler starts by defining refactoring: modifying code to improve the program's internal structure without changing the code's external behavior. In essence, refactoring is "improving the design of code after it has been written." Refactoring as a methodology has emerged in software engineering only in recent years. It originated in the Extreme Programming process, but because of its well-developed theoretical principles and the great benefits it brings to software development, it was quickly adopted by other development processes, recognized by developers, and applied successfully in industry. This book is a refactoring guide written for professional programmers; it introduces the general principles of refactoring along with a number of important refactoring techniques and guidelines. Some may ask: according to software engineering, design should come first and coding second, so why change the design after the code is written? The answer is simple: the earlier design has problems, poor readability, or poor efficiency, and usually both. Note that what is discussed here is detailed design; the overall framework design is outside the scope of refactoring. Each individual refactoring step is simple and may feel like tinkering, but grains of sand build a tower: these small changes, accumulated, can fundamentally improve the quality of the design and the code.
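As a tiny illustration of the idea (my own example, written in C# rather than the book's Java), an Extract Method refactoring changes the internal structure while the observable behavior stays exactly the same:

using System;
using System.Collections.Generic;

public class OrderLine { public double Price; public int Quantity; }
public class Order { public List<OrderLine> Lines = new List<OrderLine>(); }

public class InvoicePrinter
{
    // Before: the total calculation is buried inside the printing method.
    public void PrintInvoiceBefore(Order order)
    {
        double amount = 0;
        foreach (OrderLine line in order.Lines)
            amount += line.Price * line.Quantity;
        Console.WriteLine("Total: " + amount);
    }

    // After "Extract Method": same output (external behavior), clearer structure.
    public void PrintInvoiceAfter(Order order)
    {
        Console.WriteLine("Total: " + CalculateTotal(order));
    }

    private double CalculateTotal(Order order)
    {
        double amount = 0;
        foreach (OrderLine line in order.Lines)
            amount += line.Price * line.Quantity;
        return amount;
    }
}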

Although the book's examples are written in Java, I think programmers using any language can read the book and its code; if it really feels difficult, reading the simplest introductory Java book is enough. The Eclipse SDK includes a full-featured Java IDE with strong support for code refactoring, but refactoring is by no means limited to Java. Chapter 13 of the book was written by William Opdyke, a PhD from the University of Illinois, and it cites a paper on refactoring C++ programs. In addition, it is said that Microsoft's upcoming Visual Studio Whidbey will also ship with refactoring tools.

Programmers, especially less experienced ones, tend to be lazy and tend not to understand the importance of design; they usually aim only at getting the functionality done, and as this accumulates day after day, a software crisis emerges. Normally, once their skill improves, programmers find their own earlier code unbearable to look at and long to rewrite it. But that is a very risky activity. Martin Fowler's refactoring technique is built on a complete theoretical foundation and comes with thorough methodological guidance (including small iterative steps, frequent builds, test-first development, and other practices advocated by the Agile Alliance), so it no longer depends entirely on a programmer's talent. The book gives concrete refactoring guidelines and rigorous refactoring procedures, so that you can redesign and rewrite poor-quality code at relatively low risk.

Another field closely connected with refactoring is design patterns. We know refactoring is meant to make code more elegant, but what counts as having reached the goal of refactoring? Design patterns point the way. Although design patterns are software design structures recognized by the industry as good, you cannot design the whole system around patterns from the very start of development; that only leads to over-design, wasting a great deal of time without achieving good results. Refactoring can be seen as a corrective for software design: it keeps modifying the existing program structure (that is, the design) during development so that the design moves toward patterns. That is exactly where the goal of refactoring lies. Joshua Kerievsky describes the relationship between refactoring and patterns in Refactoring to Patterns: "Patterns are a cornerstone of object-oriented design, while test-first programming and merciless refactoring are cornerstones of evolutionary design."

One thing to note is the phrase "without changing the code's external behavior"; put plainly, you must not change what the code does. This is a very important principle, and one that people often violate. If you want to add or remove functionality, do it after the refactoring is finished, because one of the goals of refactoring is to keep the code's functionality exactly unchanged. Kent Beck, a top master of refactoring, emphasizes this again and again in the book's final chapter.

By now you can probably see that what refactoring cares about most is improving software quality, and improving quality also improves development speed. Some people may not get it: surely refactoring as you write is slower than just writing code that gets the feature done? But think about what takes the most time in software development. Testing, of course. When debugging, would you rather face a pile of poor, hard-to-understand code, or code that has been refactored and is clearly more readable? You will probably find the shoddy code unbearable while debugging and end up refactoring it anyway. So write with refactoring in mind from the start, or look back at your code after a short while; "do not neglect a good deed because it seems small." It will not waste your time, and taken as a whole, it will greatly reduce the development time of the entire project.

I remember reading an article, "葵花宝典:软件开发高手是这样炼成的!" (roughly, "The Sunflower Manual: how software development masters are made"), which had a classic passage about refactoring: "Masters never stop refactoring their software. Masters love iterative development. Masters say that incremental development is patching, while iteration is tearing it down and rebuilding. For something like software, write it once and it may be OK (even that is not easy); write it ten times and it is a great product; write it once more and it is greater still." That is how high-quality software comes into being. It is not hard at all; it only depends on whether you are willing to do it. As I said at the beginning, our goal as programmers is not to cobble together a piece of junk software that merely implements the features, but to create a great work of art, and refactoring is a necessary step along the way.

This book deserves a careful read. As an experienced programmer, you may already know part of it and pay attention to it during development; the rest will feel so reasonable that you will remember it easily. So perhaps after reading it once you will never read it a second time, yet you will think of it all the time, because it will have quietly reshaped your habits. It reminds me of what wuxia novels call "no move beats any move": once you have mastered the "moves", the deeper level is having "no moves" guiding everything you think about while writing code. As the author says, these refactoring principles cannot possibly be exhaustive, but if, starting from these principles, you grasp the "way" of writing high-quality code, you will have become a truly excellent programmer!