MapReduce: A Program for Matrix Multiplication


1. The basic principle of matrix multiplication: (1) Each element of the new matrix is the sum of the products of one row of matrix A with one column of matrix B. (2) The row subscript of matrix A becomes the new matrix's row subscript, and the column subscript of matrix B becomes the new matrix's column subscript.
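To make the principle concrete, here is a minimal plain-Java sketch, independent of Hadoop, that computes a product the same way the job will: each output element is the dot product of a row of A with a column of B. The class and method names are illustrative only.

```java
public class MatMulDemo {
    // P[i][j] is the dot product of row i of A with column j of B.
    static int[][] multiply(int[][] a, int[][] b) {
        int rows = a.length, cols = b[0].length, inner = b.length;
        int[][] p = new int[rows][cols];
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                int sum = 0;
                for (int k = 0; k < inner; k++) {
                    sum += a[i][k] * b[k][j]; // row i of A times column j of B
                }
                p[i][j] = sum;
            }
        }
        return p;
    }

    public static void main(String[] args) {
        int[][] a = {{1, 2}, {3, 4}};
        int[][] b = {{5, 6}, {7, 8}};
        int[][] p = multiply(a, b);
        System.out.println(p[0][0] + "," + p[0][1]); // 19,22
        System.out.println(p[1][0] + "," + p[1][1]); // 43,50
    }
}
```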

2. When we code, how do we think through this principle?

(1) The factor's column subscript in matrix A must equal the factor's row subscript in matrix B.

(2) Every row of matrix A must be multiplied with every column of matrix B.

(3) In the MapReduce parallel programming framework, values with the same key are shuffled into the same list and passed to a single reduce call.

(4) In reduce, sum the products of matrix A's row factors and matrix B's column factors to form one element of the new matrix, and write that sum to the output.

So we need to design the algorithm to achieve matrix multiplication:

(1) First, we construct the map output key so that the row factors of matrix A and the column factors of matrix B that participate in the same multiplication are shuffled into the same reduce input value list.

(2) In the value list, we need extra fields in the map output value to control the summing of the products: which matrix the factor comes from, and its position along the shared dimension.

3. Design the algorithm:

P[i][j] = (A * B)[i][j] = SUM over k of (A[i][k] * B[k][j])

In map, the key and the value are designed as follows:

Matrix A (input file M):

Key: [i, j] (emitted once for every column j of the result)

Value: ["M", k, A[i][k]]

Matrix B (input file N):

Key: [i, j] (emitted once for every row i of the result)

Value: ["N", k, B[k][j]]
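The key design above can be simulated in memory. The sketch below (plain Java, no Hadoop; the class name is hypothetical) emits the same tagged tuples as the map function and groups them by key, mimicking the shuffle, so you can see that every result cell (i, j) collects exactly the row factors of A and column factors of B it needs.

```java
import java.util.*;

public class ShuffleDemo {
    // Each A[i][k] is tagged ("M", k) and sent to every key (i, j);
    // each B[k][j] is tagged ("N", k) and sent to every key (i, j).
    static Map<String, List<String>> mapAndShuffle(int[][] a, int[][] b) {
        Map<String, List<String>> groups = new TreeMap<>();
        int rowM = a.length, columnM = b.length, columnN = b[0].length;
        for (int i = 1; i <= rowM; i++)
            for (int k = 1; k <= columnM; k++)
                for (int j = 1; j <= columnN; j++)
                    groups.computeIfAbsent(i + "," + j, x -> new ArrayList<>())
                          .add("M," + k + "," + a[i - 1][k - 1]);
        for (int k = 1; k <= columnM; k++)
            for (int j = 1; j <= columnN; j++)
                for (int i = 1; i <= rowM; i++)
                    groups.computeIfAbsent(i + "," + j, x -> new ArrayList<>())
                          .add("N," + k + "," + b[k - 1][j - 1]);
        return groups;
    }

    public static void main(String[] args) {
        Map<String, List<String>> g =
            mapAndShuffle(new int[][]{{1, 2}, {3, 4}}, new int[][]{{5, 6}, {7, 8}});
        // Every key (i,j) receives columnM "M" tuples and columnM "N" tuples.
        System.out.println(g.get("1,1")); // [M,1,1, M,2,2, N,1,5, N,2,7]
    }
}
```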

4. The program:

package com.catchingsun.matrix;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

import java.io.IOException;

public class MatrixMultiply {

    private static final int COLUMN_N = 3; // columns of matrix N (B)
    private static final int ROW_M = 5;    // rows of matrix M (A)
    private static final int COLUMN_M = 6; // columns of M == rows of N (shared dimension)

    public static class MatrixMap extends Mapper<Object, Text, Text, Text> {

        private final Text mapKey = new Text();
        private final Text mapValue = new Text();
        // Row counters; they assume each matrix file is read as a single split,
        // so the line number within the split is the row subscript.
        private static int mi = 0;
        private static int ni = 0;

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            FileSplit fileSplit = (FileSplit) context.getInputSplit();
            String fileName = fileSplit.getPath().getName();
            if (fileName.contains("M")) {
                mi++;
                int mj = 0;
                String[] tuple = value.toString().split(",");
                for (String s : tuple) {
                    mj++;
                    // M[mi][mj] contributes to every column of row mi in the product.
                    for (int k = 1; k < COLUMN_N + 1; k++) {
                        mapKey.set(mi + "," + k);
                        mapValue.set("M" + "," + mj + "," + s);
                        context.write(mapKey, mapValue);
                    }
                }
            } else if (fileName.contains("N")) {
                ni++;
                int nj = 0;
                String[] tuple = value.toString().split(",");
                for (String s : tuple) {
                    nj++;
                    // N[ni][nj] contributes to every row of column nj in the product.
                    for (int i = 1; i < ROW_M + 1; i++) {
                        mapKey.set(i + "," + nj);
                        mapValue.set("N" + "," + ni + "," + s);
                        context.write(mapKey, mapValue);
                    }
                }
            }
        }
    }

    public static class MatrixReduce extends Reducer<Text, Text, Text, Text> {

        private int sum = 0;
        private final int[] m = new int[COLUMN_M + 1];
        private final int[] n = new int[COLUMN_M + 1];

        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // Collect the row factors of M and the column factors of N by
            // their position k along the shared dimension.
            for (Text val : values) {
                String[] tuple = val.toString().split(",");
                if (tuple[0].equals("M")) {
                    m[Integer.parseInt(tuple[1])] = Integer.parseInt(tuple[2]);
                } else {
                    n[Integer.parseInt(tuple[1])] = Integer.parseInt(tuple[2]);
                }
            }
            for (int i = 1; i < COLUMN_M + 1; i++) {
                sum += m[i] * n[i];
            }
            context.write(key, new Text(Integer.toString(sum)));
            sum = 0;
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "matrixmultiply");
        job.setJarByClass(MatrixMultiply.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setMapperClass(MatrixMultiply.MatrixMap.class);
        job.setReducerClass(MatrixMultiply.MatrixReduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
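As a sanity check on the reduce step, the plain-Java sketch below (the class name is hypothetical) applies the same parse-and-sum logic as MatrixReduce to one reduce input list — the list that key (1,1) of a small 2x2 example would receive.

```java
import java.util.*;

public class ReduceDemo {
    // Parse the grouped tuples for one key and sum m[k] * n[k]
    // over the shared index k, exactly as the reducer does.
    static int reduce(List<String> values, int columnM) {
        int[] m = new int[columnM + 1];
        int[] n = new int[columnM + 1];
        for (String val : values) {
            String[] tuple = val.split(",");
            if (tuple[0].equals("M")) {
                m[Integer.parseInt(tuple[1])] = Integer.parseInt(tuple[2]);
            } else {
                n[Integer.parseInt(tuple[1])] = Integer.parseInt(tuple[2]);
            }
        }
        int sum = 0;
        for (int k = 1; k <= columnM; k++) {
            sum += m[k] * n[k];
        }
        return sum;
    }

    public static void main(String[] args) {
        // Value list for key (1,1): row 1 of A is (1,2), column 1 of B is (5,7),
        // so the output element is 1*5 + 2*7 = 19.
        List<String> values = Arrays.asList("M,1,1", "M,2,2", "N,1,5", "N,2,7");
        System.out.println(reduce(values, 2)); // 19
    }
}
```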

