A complete guide to Python pandas usage
1. Generate a data table
1. First import the pandas library. NumPy is generally used alongside it, so import both:
import numpy as np
import pandas as pd
2. Import CSV or xlsx files:
df = pd.DataFrame(pd.read_csv('name.csv', header=1))
df = pd.DataFrame(pd.read_excel('name.xlsx'))
3. Create a data table with pandas:
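The example code for this step is missing from the source; as a minimal sketch, a data table can be built by passing a dict of columns to the pd.DataFrame constructor (the column names and values below are illustrative, not from the original):

```python
import numpy as np
import pandas as pd

# Illustrative data table; column names and values are assumptions
df = pd.DataFrame({
    'id': [1001, 1002, 1003],
    'city': ['Beijing', 'Shanghai', 'Guangzhou'],
    'price': [1200.0, np.nan, 2133.0],
}, index=pd.date_range('2023-01-01', periods=3))

print(df.shape)    # (3, 3)
print(df.dtypes)   # per-column data types
```

Passing a dict maps each key to a column; the optional index argument replaces the default integer index.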
array([[3, 4],
       [6, 7]])
Once you understand slicing with a step, two- and three-dimensional slicing follows the same pattern and is no more complicated. You can also assign to the sliced elements.
>>> b[1:, :2] = 1            # broadcast assignment
>>> b
array([[0, 1, 2],
       [1, 1, 5],
       [1, 1, 8]])
>>> b[1:, :2].shape
(2, 2)
>>> b[1:, :2] = np.arange(2, 6).reshape(2, 2)   # element-by-element assignment
>>> b
array([[0, 1, 2],
       [2, 3, 5],
       [4, 5, 8]])
Similarly, sequence[start1:end1, start2:end2] slices each axis independently.
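The two assignments above can be reproduced end to end; a small sketch, assuming b starts as np.arange(9).reshape(3, 3), which matches the arrays printed above:

```python
import numpy as np

b = np.arange(9).reshape(3, 3)   # [[0,1,2],[3,4,5],[6,7,8]]

b[1:, :2] = 1                    # broadcast: the scalar fills the 2x2 block
assert b.tolist() == [[0, 1, 2], [1, 1, 5], [1, 1, 8]]

b[1:, :2] = np.arange(2, 6).reshape(2, 2)   # shapes match: element-wise copy
assert b.tolist() == [[0, 1, 2], [2, 3, 5], [4, 5, 8]]
```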
factors: some groups, such as women, children, and the upper class, were more likely to survive. In this question we are asked to analyze who was more likely to survive. From prior knowledge (books, films, and so on) we already know that women and children had priority; the same training data can be used to compute the survival rate of women.

#!/usr/bin/env python
# coding: utf-8
"""
Created on November 25, 2014
@author: zhaohf
"""
import pandas as pd

df = pd.read_csv('.../data/train.csv', header=0)
female_tourist
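The code above is cut off before the actual computation; a self-contained sketch of computing the survival rate of women follows (the Sex and Survived column names are from the standard Titanic data set, and the inline rows are invented for illustration):

```python
import pandas as pd

# Tiny stand-in for the Titanic training data; values are illustrative only
df = pd.DataFrame({
    'Sex': ['female', 'male', 'female', 'male', 'female'],
    'Survived': [1, 0, 1, 0, 0],
})

female = df[df['Sex'] == 'female']
female_survival_rate = female['Survived'].mean()
print(female_survival_rate)   # 2 of the 3 women survived -> about 0.667
```

Since Survived is 0/1, its mean over a subgroup is exactly that subgroup's survival rate.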
Comparison of the pandas DataFrame and the Spark DataFrame:

Data source reads
CSV data sets, structured data files, HDF5, JSON data sets, Excel files, Hive tables, external databases

Index
pandas DataFrame: an index is created automatically
Spark DataFrame: there is no index; an additional column must be created if one is needed

Row structure
pandas DataFrame: rows are Series objects, part of the pandas DataFrame structure
Spark DataFrame: rows are Row objects, part of the Spark DataFrame structure
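The index difference can be seen directly in pandas; a small sketch (handing the frame to Spark, e.g. via spark.createDataFrame, is not shown since it assumes a running SparkSession):

```python
import pandas as pd

df = pd.DataFrame({'a': [10, 20, 30]})
print(df.index.tolist())      # a RangeIndex is created automatically: [0, 1, 2]

# Materialize the index as an ordinary column, e.g. before a Spark hand-off
df2 = df.reset_index()
print(df2.columns.tolist())   # ['index', 'a']
```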
Pivot tables in pandas have the following advantages:
Faster (once set up)
Self-documenting (reading the code shows what it has done)
Easy to generate reports or emails
More flexible, because you can define custom aggregation functions
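As a small sketch of the flexibility point, pd.pivot_table accepts any custom aggregation function (the data below is invented for illustration):

```python
import pandas as pd

sales = pd.DataFrame({
    'Manager': ['Ann', 'Ann', 'Bob', 'Bob'],
    'Price': [100, 200, 300, 400],
})

# aggfunc may be any callable, not just built-in names like 'sum'
table = pd.pivot_table(sales, index='Manager', values='Price',
                       aggfunc=lambda s: s.max() - s.min())
print(table)   # price range per manager: 100 for both Ann and Bob
```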
Read in the data
First, let's build the required environment.
If you want to continue with me, you can download this Excel file.
import pandas as pd
import numpy as np
Version reminder
Because the pivot_table API has changed over time, make sure you are running a recent version of pandas so that the sample code works as shown.
# -*- coding: utf-8 -*-
# Python for Data Analysis, Chapter 9
# Data aggregation and group operations
import pandas as pd
import numpy as np
import time

# The group-operation process: split-apply-combine
start = time.time()
np.random.seed(10)

# 1. GroupBy mechanics
# 1.1 Introductory example
df = pd.DataFrame({'Key1': ['A', 'B', 'A', 'B', 'a'],
                   'Key2': ['one', 'one', 'one', 'one', 'one'],
                   'Data1': np.random.randint(1, 10, 5),
                   'Data2': np.random.randn(5)})
print(df)
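The split-apply-combine process the chapter describes can be shown in two lines (a sketch with a simplified frame; column names follow the example above):

```python
import pandas as pd

df = pd.DataFrame({'Key1': ['A', 'B', 'A', 'B'],
                   'Data1': [1, 2, 3, 4]})

# split by Key1, apply mean to each group, combine into a new Series
grouped = df.groupby('Key1')['Data1'].mean()
print(grouped)   # A -> 2.0, B -> 3.0
```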
2018.03.26: common pandas string methods
import numpy as np
import pandas as pd

# Common string methods: strip
s = pd.Series(['jack', 'jill', 'jease ', 'feank'])
df = pd.DataFrame(np.random.randn(3, 2),
                  columns=['column A', 'column B '],
                  index=range(3))
print(s)
print(df.columns)

print('----')
print(s.str.lstrip().values)   # strip leading whitespace
print(s.str.rstrip().values)   # strip trailing whitespace
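The same .str accessor also works on an Index, which is handy for cleaning up column names (a small sketch):

```python
import pandas as pd

df = pd.DataFrame([[1, 2]], columns=[' Column A ', 'Column B '])
df.columns = df.columns.str.strip()   # strip whitespace on both sides
print(df.columns.tolist())   # ['Column A', 'Column B']
```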