In December 2015, the XX project needed a data export feature. All of the page-level functionality had already been implemented, but the data volume was large, and exporting through the page caused it to hang. I chose to rewrite the feature with ADO.NET because I prefer to work close to the underlying logic: read the rows directly through a SqlCommand and SqlDataReader, and write the buffer out to a file every 20,000 rows so that too much data is never held in memory at once. In the final test it exported 25 GB of data, which met the system's requirements.
using System;
using System.Data;
using System.Data.SqlClient;
using System.IO;
using System.Text;

public class ado_net
{
    public ActionResult ExportData()
    {
        string sAbsolutePath = "XXX";
        string fileName = string.Format("AQ_{0}.csv", DateTime.Now.ToString("yyyyMMddHHmmss"));
        try
        {
            FincapDbContext db = DbContextFactory.GetCurrentContext();
            SqlConnection conn = new SqlConnection(db.CurrentConnectionString);
            conn.Open();
            SqlCommand cmd = new SqlCommand("SQL statement", conn);
            cmd.CommandTimeout = 0; // original value lost in the source; 0 means no timeout
            SqlDataReader sdr = cmd.ExecuteReader();
            StreamWriter sw = new StreamWriter(sAbsolutePath + "\\" + fileName, false, Encoding.GetEncoding("GB2312"));
            StringBuilder sb = new StringBuilder();
            int k = 0;

            // Header row: column names
            for (int m = 0; m < sdr.FieldCount; m++)
            {
                sb.Append(sdr.GetName(m) + ",");
            }
            sb.Append(Environment.NewLine);

            while (sdr.Read())
            {
                k++;
                for (int n = 0; n < sdr.FieldCount; n++)
                {
                    sb.Append(sdr[n] + ",");
                }
                sb.Append(Environment.NewLine);

                // Flush the buffer to disk every 20,000 rows to keep memory bounded
                if (k > 20000)
                {
                    k = 0;
                    sw.Write(sb.ToString());
                    sb.Length = 0;
                }
            }

            // Write out whatever remains in the buffer
            if (k <= 20000)
            {
                sw.Write(sb.ToString());
            }

            sw.Flush();
            sw.Close();
            conn.Close();
            return File(sAbsolutePath + "\\" + fileName, "application/zip-x-compressed", fileName);
        }
        catch
        {
            return File("XXX"); // error handling elided in the source
        }
    }
}
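One caveat with the listing above: it appends raw field values separated by commas, so any value that itself contains a comma, quote, or newline will corrupt the CSV. A minimal escaping helper in the style of RFC 4180 could be dropped in where `sdr[n]` is appended; `EscapeCsv` is a hypothetical name, not part of the original code.

```csharp
using System;

static class CsvUtil
{
    // Hypothetical helper: quote a field when it contains a comma, quote,
    // or newline, doubling any embedded quotes (RFC 4180 style).
    public static string EscapeCsv(string field)
    {
        if (field == null) return string.Empty;
        bool needsQuotes = field.IndexOfAny(new[] { ',', '"', '\r', '\n' }) >= 0;
        if (!needsQuotes) return field;
        return "\"" + field.Replace("\"", "\"\"") + "\"";
    }
}
```

In the export loop this would become `sb.Append(CsvUtil.EscapeCsv(Convert.ToString(sdr[n])) + ",");`, at a small cost per field.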
Export data using ADO.NET