Implementing cloud-disk-like functions (upload, download, delete, rename) with the Hadoop 2.6.0 C API


Test environment: CentOS 6.6, hadoop-2.6.0

This post uses the C API (libhdfs) that ships with Hadoop to access HDFS and implement cloud-disk-style upload, download, delete, and rename functions. Anyone interested is welcome to add further features. Without further ado, let's get to it.

First, we need to be able to access HDFS through the C API on hadoop-2.6.0.

More information: http://blog.csdn.net/u013930856/article/details/47660937
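Before building the cloud disk menu, it is worth confirming that your program can link against libhdfs and reach the NameNode at all. Below is a minimal connectivity check, a sketch only, assuming the same NameNode address (10.25.100.130:9000) used later in this post; the include and library paths in the build command will vary with your installation.

/* Minimal libhdfs connectivity check (a sketch, not part of the original post).
 * Build roughly like this, adjusting paths for your installation:
 *   gcc hdfs_check.c -I$HADOOP_HOME/include -L$HADOOP_HOME/lib/native -lhdfs -ljvm
 */
#include <stdio.h>
#include "hdfs.h"

int main(void)
{
    hdfsFS fs = hdfsConnect("10.25.100.130", 9000);    /* NameNode address used in this post */
    if (fs == NULL)
    {
        fprintf(stderr, "hdfsConnect failed\n");
        return 1;
    }
    printf("Connected to HDFS\n");
    hdfsDisconnect(fs);    /* release the connection */
    return 0;
}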

Now for the cloud disk features themselves.

First, in the main function, we connect to the Hadoop server and create the user's own directory:

#include <stdio.h>
#include <stdlib.h>
#include <strings.h>   /* bzero() */
#include <fcntl.h>     /* O_WRONLY, O_CREAT, O_RDONLY */
#include "hdfs.h"      /* libhdfs C API */

#define LENGTH 1024    /* transfer buffer size used by the upload/download functions */

/* forward declarations of the functions defined below */
void hdfsChoseMenu_function(void);
void hdfsSendFile_function(hdfsFS fs, char creatdirname[]);
void hdfsDownFile_function(hdfsFS fs, char creatdirname[]);
void hdfsDelete_function(hdfsFS fs);
void hdfsRename_function(hdfsFS fs);
void hdfsQuit_function(hdfsFS fs);

int main(int argc, char **argv)
{
    char creatdirname[30];    /* directory (with path) to create */
    int create;

    hdfsFS fs = hdfsConnect("10.25.100.130", 9000);    /* connect to the Hadoop server */

    printf("Please enter the directory (with path) you want to create: \n");
    scanf("%s", creatdirname);

    create = hdfsCreateDirectory(fs, creatdirname);

    printf("create = %d\n", create);
    if (create == -1)
    {
        printf("Create failed!\n");
        exit(1);
    }

    while (1)
    {
        int num;

        hdfsChoseMenu_function();    /* print the menu */
        scanf("%d", &num);
        switch (num)
        {
        case 1: hdfsSendFile_function(fs, creatdirname);    /* upload a file to HDFS */
            break;
        case 2: hdfsDownFile_function(fs, creatdirname);    /* download a file from HDFS */
            break;
        case 3: hdfsDelete_function(fs);                    /* delete a file */
            break;
        case 4: hdfsRename_function(fs);                    /* rename a file */
            break;
        case 0: hdfsQuit_function(fs);                      /* disconnect and exit */
            break;
        default: printf("Please input ERROR!!!\n");
        }
    }
}
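main() calls hdfsChoseMenu_function() and hdfsQuit_function(), which are not shown in this post. A plausible minimal sketch of the two helpers might look like this:

/* Sketch of the two helpers used by main() but not listed in this post. */
void hdfsChoseMenu_function(void)    /* print the menu */
{
    printf("\n===== HDFS cloud disk =====\n");
    printf("1. Upload file\n");
    printf("2. Download file\n");
    printf("3. Delete file\n");
    printf("4. Rename file\n");
    printf("0. Quit\n");
    printf("Please choose: ");
}

void hdfsQuit_function(hdfsFS fs)    /* disconnect from HDFS and exit */
{
    hdfsDisconnect(fs);
    exit(0);
}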

Uploading a file to the server:

void hdfsSendFile_function(hdfsFS fs, char creatdirname[])    /* upload a file to HDFS */
{
    char sendfilename[30];    /* local file name */
    char sendfilepath[50];    /* destination path on HDFS */
    char buffer[LENGTH];      /* transfer buffer */

    printf("Please enter the name of the file to be uploaded: ");
    scanf("%s", sendfilename);
    sprintf(sendfilepath, "%s/%s", creatdirname, sendfilename);
    hdfsFile openfilename = hdfsOpenFile(fs, sendfilepath, O_WRONLY | O_CREAT, 0, 0, 0);

    FILE *fp = fopen(sendfilename, "r");

    if (NULL == fp)
    {
        printf("File: %s not found\n", sendfilename);
    }
    else
    {
        bzero(buffer, LENGTH);
        tSize length = 0;
        while ((length = fread(buffer, sizeof(char), LENGTH, fp)) > 0)
        {
            printf("length = %d\n", length);
            tSize num_written_bytes = hdfsWrite(fs, openfilename, buffer, length);
            printf("num_written_bytes = %d\n", num_written_bytes);
            if (hdfsFlush(fs, openfilename))
            {
                fprintf(stderr, "Failed to flush %s\n", sendfilepath);
                exit(-1);
            }
            bzero(buffer, LENGTH);
        }
        fclose(fp);
        hdfsCloseFile(fs, openfilename);
        printf("\n>>> uploaded file successfully!!!\n");
    }
}
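To double-check that an upload actually landed on HDFS, you can query the file's metadata. The helper below is not part of the original code and its name is made up for the example; it simply stats the path that the upload function just wrote.

/* Optional helper (not in the original post): confirm an uploaded file exists
 * on HDFS and print its size and owner. */
void hdfsCheckUpload_function(hdfsFS fs, const char *hdfspath)
{
    hdfsFileInfo *info = hdfsGetPathInfo(fs, hdfspath);    /* stat the HDFS path */
    if (info != NULL)
    {
        printf("%s: %ld bytes, owner %s\n", info->mName, (long)info->mSize, info->mOwner);
        hdfsFreeFileInfo(info, 1);    /* free the single hdfsFileInfo entry */
    }
    else
    {
        fprintf(stderr, "could not stat %s on HDFS\n", hdfspath);
    }
}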

Downloading a file:

void hdfsDownFile_function(hdfsFS fs, char creatdirname[])    /* download a file from HDFS */
{
    char downfilename[30];    /* file name to download */
    char downfilepath[50];    /* full path on HDFS */
    char buffer[LENGTH];      /* transfer buffer */

    printf("Please enter the file name to download: ");
    scanf("%s", downfilename);

    sprintf(downfilepath, "%s/%s", creatdirname, downfilename);
    hdfsFile downopenfile = hdfsOpenFile(fs, downfilepath, O_RDONLY, 0, 0, 0);
    if (NULL == downopenfile)
    {
        printf("Open file failed!\n");
        exit(1);
    }
    else
    {
        FILE *fp = fopen(downfilename, "w");
        if (NULL == fp)
        {
            printf("file:\t%s cannot be opened for writing\n", downfilename);
            exit(1);
        }
        else
        {
            tSize d_length = 0;
            while ((d_length = hdfsRead(fs, downopenfile, buffer, LENGTH)) > 0)
            {
                printf("d_length = %d\n", d_length);

                if (fwrite(buffer, sizeof(char), d_length, fp) < (size_t)d_length)
                {
                    printf("file:\t%s write failed\n", downfilename);
                    break;
                }
                bzero(buffer, LENGTH);
            }
            fclose(fp);
            hdfsCloseFile(fs, downopenfile);
            printf("\n>>> downloaded file successfully!!!\n");
        }
    }
}
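hdfsOpenFile() returning NULL is the only failure signal the download function relies on. If you prefer an explicit check, libhdfs provides hdfsExists(), which returns 0 when a path is present; the small guard below is an optional addition with a made-up name, not part of the original code.

/* Optional guard (not in the original post): check that a remote file exists
 * before trying to open it for download. Returns 0 if present, -1 otherwise. */
int hdfsCheckRemoteFile_function(hdfsFS fs, const char *hdfspath)
{
    if (hdfsExists(fs, hdfspath) != 0)
    {
        fprintf(stderr, "%s does not exist on HDFS\n", hdfspath);
        return -1;
    }
    return 0;
}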

Deleting a file:

void hdfsDelete_function(hdfsFS fs)    /* delete a file from HDFS */
{
    int num_delete;
    char delete_hdfsfilepath[50];

    printf("Please enter the name and path of the file you want to delete: ");
    scanf("%s", delete_hdfsfilepath);
    num_delete = hdfsDelete(fs, delete_hdfsfilepath, 0);    /* 0 = non-recursive */
    printf("num_delete = %d\n", num_delete);
}
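The third argument to hdfsDelete() is a recursion flag; passing 0, as above, is meant for single files. To remove a whole directory and everything under it, the flag would be set to 1, as in this sketch (the function name is made up for the example):

/* Optional variant (not in the original post): delete an HDFS directory and
 * all of its contents by passing recursive = 1 to hdfsDelete(). */
int hdfsDeleteDir_function(hdfsFS fs, const char *dirpath)
{
    int ret = hdfsDelete(fs, dirpath, 1);    /* 1 = recursive delete */
    printf("recursive delete of %s returned %d\n", dirpath, ret);
    return ret;
}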

Renaming a file:

void hdfsRename_function(hdfsFS fs)    /* rename a file on HDFS */
{
    int num_rename;
    char hdfsfilepath[30] = {0};
    char oldhdfsfilename[30] = {0};
    char newhdfsfilename[30] = {0};
    char oldhdfsfilepath[50] = {0};
    char newhdfsfilepath[50] = {0};

    printf("Please enter the file path and old file name, separated by a space (e.g. /xiaodai 1.jpg): ");
    scanf("%s%s", hdfsfilepath, oldhdfsfilename);
    printf("Please enter the new file name: ");
    scanf("%s", newhdfsfilename);

    sprintf(oldhdfsfilepath, "%s/%s", hdfsfilepath, oldhdfsfilename);
    sprintf(newhdfsfilepath, "%s/%s", hdfsfilepath, newhdfsfilename);

    num_rename = hdfsRename(fs, oldhdfsfilepath, newhdfsfilepath);
    printf("num_rename = %d\n", num_rename);
}
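A cloud disk usually also needs a way to list what is stored in a directory. The original post does not include one, but a sketch built on hdfsListDirectory() could look like this (the helper name is hypothetical):

/* Optional helper (not in the original post): list the contents of an HDFS
 * directory, similar to a cloud disk's file view. */
void hdfsListDir_function(hdfsFS fs, const char *dirpath)
{
    int numEntries = 0;
    hdfsFileInfo *entries = hdfsListDirectory(fs, dirpath, &numEntries);
    if (entries == NULL)
    {
        fprintf(stderr, "could not list %s\n", dirpath);
        return;
    }
    for (int i = 0; i < numEntries; i++)
    {
        printf("%c %10ld  %s\n",
               entries[i].mKind == kObjectKindDirectory ? 'd' : '-',
               (long)entries[i].mSize, entries[i].mName);
    }
    hdfsFreeFileInfo(entries, numEntries);    /* free the returned array */
}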

This is only a simple implementation of the basic features; anyone who wants more functionality is welcome to build on it. The code above is just the core of the implementation; the complete code and a run guide can be downloaded here:

http://download.csdn.net/detail/u013930856/9012061
