to get each line:

for line in file_object:
    process(line)

3. Writing files

Write a text file: output = open('data', 'w')
Write a binary file: output = open('data', 'wb')
Append to a file: output = open('data', 'a')
(Note: 'w+' opens for reading and writing but truncates the existing file; use 'a' or 'a+' to append.)

Write data:
file_object = open('thefile.txt', 'w')
file_object.write(all_the_text)
file_object.close()

Write multiple lines:
file_object.writelines(list_of_text_strings)

Note that calling writelines to write several lines at once performs better than repeated single calls to write.
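The calls above can be combined into one short runnable sketch (the file name and variables are illustrative, matching the text):

```python
# Write a whole string, then several lines at once, then read back.
all_the_text = "first line\nsecond line\n"
list_of_text_strings = ["third line\n", "fourth line\n"]

with open("thefile.txt", "w") as file_object:      # 'w' creates/truncates
    file_object.write(all_the_text)
    file_object.writelines(list_of_text_strings)   # one call, many lines

with open("thefile.txt") as file_object:
    for line in file_object:                       # iterate line by line
        print(line.rstrip())
```

Using `with` closes the file automatically, so the explicit close() from the text is not needed here.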
An online redo log file whose status is ACTIVE or CURRENT indicates that the data modifications it records have not yet been fully synchronized to the data files, and the instance will still need to read its redo records during recovery. If such a log is corrupted, some data loss is unavoidable.
1) Simulate disaster
First look at the
Label: Reading very large files with the normal file-reading approach is very slow. Java provides the RandomAccessFile class, which can read large files quickly without the program feeling stuck. Below is a demo of mine using server log files.
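The original demo uses Java's RandomAccessFile; as a rough Python analogue of the same idea (the log file name is made up for the example), seek() lets you jump to an offset instead of scanning the whole file, e.g. to read only the tail of a large server log:

```python
import os

def read_tail(path, max_bytes=4096):
    """Read at most the last max_bytes of a file without scanning it all."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        f.seek(max(0, size - max_bytes))   # jump near the end of the file
        return f.read().decode("utf-8", errors="replace")

# Example: create a "large" log, then read only its tail.
with open("server.log", "w") as f:
    for i in range(10000):
        f.write(f"request {i}\n")

print(read_tail("server.log", 100))
```

The first returned line may be a partial record (the seek can land mid-line), which callers usually discard.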
However, inside the function I cannot read the values of the variables defined in the included file.
function admin_log($sn = '', $action, $content)
{
    // Log operations and actions
    import('common.logAction', APP_PATH, '.php');
    echo $GLOBALS['_LANG']['log_action'][$action]; exit;
}
Please give me some suggestions...
------ Solution --------------------
Bro
This article explains in detail the various methods for reading large files in PHP; interested readers can refer to it.
Reading large files has always been a headache. In PHP, small files can be read directly with a variety of built-in functions, but once the file gets large, the usual approaches either fail or take far too long.
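The article's fixes are in PHP; as a language-neutral illustration of the core idea (read in fixed-size chunks instead of loading the whole file into memory), here is a short Python sketch, with an illustrative file name:

```python
def read_in_chunks(path, chunk_size=64 * 1024):
    """Yield fixed-size chunks so memory use stays constant."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk

# Count bytes without ever holding the whole file in memory.
with open("big.txt", "w") as f:
    f.write("x" * 1_000_000)

total = sum(len(c) for c in read_in_chunks("big.txt"))
print(total)  # 1000000
```

The same pattern exists in PHP via repeated fread() calls on a handle, which is what the large-file solutions in this article build on.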
Solution 3: Spread reads and writes randomly across a set of files to reduce the chance of concurrent access.
When recording user access logs, this solution seems to be widely used. A random space is defined in advance; the larger the space, the lower the probability of two writers hitting the same file. Suppose the random space is [1, 500]: the log files are then distributed as log1 through log500, and each user access writes to a randomly chosen file among them. With two logging processes running at the same time, process A may be updating log32 while process B updates log399, so they rarely contend for the same file.
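A minimal sketch of that random-spread scheme (the [1, 500] range comes from the text; the directory name and line format are illustrative):

```python
import os
import random

LOG_DIR = "logs"
N_FILES = 500  # the random space [1, 500] from the text

def append_access_log(line):
    """Append one log line to a randomly chosen shard, log1..log500."""
    os.makedirs(LOG_DIR, exist_ok=True)
    n = random.randint(1, N_FILES)
    with open(os.path.join(LOG_DIR, f"log{n}"), "a") as f:
        f.write(line + "\n")

for i in range(100):
    append_access_log(f"user visit {i}")
```

The trade-off is that two concurrent writers only collide when they pick the same shard (roughly a 1-in-500 chance here), but reading the logs back requires merging all the shards.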
/var holds data that changes while the system is running. This includes data specific to each system, i.e. directories that cannot be shared with other computers, such as /var/log, /var/lock, and /var/run. Some subdirectories can still be shared with other systems, such as /var/mail, /var/cache/man, /var/cache/fonts, and /var/spool/news. The purpose of /var is to hold variable data such as spool files, logs, and temporary files.
1. Some functions related to file operations
(1) ngx_file_info
Macro definition: #define ngx_file_info(file, sb)  stat((const char *) file, sb)
For the usage of the stat function, see http://wenku.baidu.com/view/31777dc1d5bbfd0a795673b1.html
(2) ngx_open_file(name, mode, create, access)
Macro definition: #define ngx_open_file(name, mode, create, access)  open((const char *) name, mode | create, access)
For the open function, see http://baike.baidu.com/view/26337.htm
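These macros are thin wrappers over the POSIX stat(2) and open(2) system calls. As a sketch (this is not nginx code, just the underlying syscalls exercised through Python's os module, with an illustrative file name):

```python
import os

# open(2): mode | create flags, plus 0o644 access bits,
# mirroring the ngx_open_file(name, mode, create, access) shape.
fd = os.open("demo.dat", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello")
os.close(fd)

# stat(2): fills a stat buffer, exposed in Python as os.stat_result,
# mirroring ngx_file_info(file, sb).
info = os.stat("demo.dat")
print(info.st_size)  # 5
```

nginx keeps these behind macros so the same code compiles against different platform file APIs.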
2. How does nginx
Oracle HowTo: how to change the location of Oracle data files while the database is in read-only mode. There are multiple ways to move data files; I have introduced several before: "Oracle HowTo: how to change the data file location in noarchivelog mode" and "Oracle HowTo: how to move the data file location". The preceding two methods ma