The first step is to download the latexstudio integrated environment and install it by following the installer prompts.
The second step is to write the .tex file.
The code is as follows:
\documentclass[journal,onecolumn]{IEEEtran}
\usepackage{amsmath,graphicx}
\usepackage{CJK}
\usepackage{algorithm}    % format of the algorithm
\usepackage{algorithmic}  % format of the algorithm
\usepackage{ctex}

% correct bad hyphenation here
\hyphenation{op-tical net-works semi-conduc-tor}

% redefine the Require/Ensure text displayed by the algorithm package
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\newcommand{\upcite}[1]{\textsuperscript{\textsuperscript{\cite{#1}}}}

\begin{document}

\title{Speech endpoint detection algorithm\upcite{texton}}
\author{yellow sir}

% the paper headers
%\markboth{Journal of \LaTeX\ Class Files,~Vol.~6, No.~1, January~2007}%
%{Shell \MakeLowercase{\textit{et al.}}: Bare Demo of IEEEtran.cls for Journals}

\maketitle
\hfill \today

\begin{abstract}
\boldmath
The abstract goes here.
\end{abstract}

%\begin{IEEEkeywords}
%IEEEtran, journal, \LaTeX, paper, template.
%\end{IEEEkeywords}

\section{Test algorithm flow}

\begin{algorithm}[htb]            % algorithm starts here
\caption{Test algorithm flow.}    % algorithm title
\label{alg:segmentation}          % label so the algorithm can be referenced in the text
\begin{algorithmic}[1]            % the [1] numbers every line of the algorithm
\REQUIRE ~~\\                     % input of the algorithm
  the training result of the shape filters obtained by the Boost algorithm;\\
  the test image $I$;\\
  the category set
\ENSURE ~~\\                      % output of the algorithm
  the category that each pixel of the test image belongs to\upcite{16bitmcuspeech}
\STATE \textbf{Calculate the shape filter responses:}\\
  for each pixel of the image, compute the responses of the 700 shape filters
  (each shape filter produces one output value per category), giving a
  $700 \times 21$ table of responses over the 21 categories
\end{algorithmic}
\end{algorithm}

%\subsection{Subsection Heading Here}
%\subsubsection{Subsubsection Heading Here}
% Subsubsection text here.

\bibliographystyle{IEEEtran}
\bibliography{mybib}

\end{document}
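For reference, the algorithm label and the citation keys defined above can then be used from the body text. A minimal sketch follows; the sentence wording is illustrative only, while \ref{alg:segmentation}, \upcite{texton}, and \upcite{16bitmcuspeech} are the label and keys from the file above:

% illustrative body text, not part of the original template
The overall procedure is summarized in Algorithm~\ref{alg:segmentation}.
The shape filters follow the TextonBoost approach\upcite{texton}, and
reverberation-robust recognition is covered elsewhere\upcite{16bitmcuspeech}.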
The third step is to produce the .bib file. This can be done together with EndNote; refer to online guides for the specific steps.
@article{texton,
  author      = {Shotton, Jamie and Winn, John and Rother, Carsten and Criminisi, Antonio},
  affiliation = {University of Cambridge Machine Intelligence Laboratory, Trumpington Street, Cambridge CB2 1PZ, UK},
  title       = {TextonBoost for Image Understanding: Multi-Class Object Recognition and Segmentation by Jointly Modeling Texture, Layout, and Context},
  journal     = {International Journal of Computer Vision},
  publisher   = {Springer Netherlands},
  issn        = {0920-5691},
  keyword     = {Computer Science},
  pages       = {2--23},
  volume      = {81},
  issue       = {1},
  url         = {http://dx.doi.org/10.1007/s11263-007-0109-1},
  note        = {10.1007/s11263-007-0109-1},
  year        = {},
}

@article{16bitmcuspeech,
  language  = {English},
  copyright = {Compilation and indexing terms, Copyright Elsevier Inc.; Compendex},
  title     = {Making Machines Understand Us in Reverberant Rooms: Robustness Against Reverberation for Automatic Speech Recognition},
  journal   = {IEEE Signal Processing Magazine},
  author    = {Yoshioka, Takuya and Sehr, Armin and Delcroix, Marc and Kinoshita, Keisuke and Maas, Roland and Nakatani, Tomohiro and Kellermann, Walter},
  volume    = {29},
  number    = {6},
  year      = {2012},
  pages     = {114--126},
  issn      = {10535888},
  address   = {445 Hoes Lane / P.O. Box 1331, Piscataway, NJ 08855-1331, United States},
  abstract  = {Speech recognition technology has left the laboratory and is increasingly coming into practical use, enabling a wide spectrum of innovative and exciting voice-driven applications that are radically changing the way we access digital services and information. Most of today's applications still require a microphone located near the talker. However, almost all of these applications would benefit from distant-talking speech capturing, where talkers are able to speak at some distance from the microphones without the encumbrance of handheld or body-worn equipment [1]. For example, applications such as meeting speech recognition, automatic annotation of consumer-generated videos, speech-to-speech translation in teleconferencing, and hands-free interfaces for controlling consumer products, like interactive TV, would greatly benefit from distant-talking operation. Furthermore, for a number of unexplored but important applications, distant microphones are a prerequisite. This means that distant-talking speech recognition technology is essential for extending the availability of speech recognizers as well as enhancing the convenience of existing speech recognition applications. © IEEE.},
  key       = {Reverberation},
  keywords  = {Information services; Microphones; Laboratories; Speech recognition;},
  note      = {Automatic annotation; Automatic speech recognition; Digital services; Handhelds; Hands-free; Interactive TV; Reverberant; Speech recognition technology; Speech recognizer; Speech-to-speech translation; Wide spectrum;},
  url       = {http://dx.doi.org/10.1109/MSP.2012.2205029},
}
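Further entries in mybib.bib follow the same pattern. Below is a minimal @article skeleton as a sketch; the key myref2024 and all field values are placeholders rather than real bibliographic data, and the key must match what \cite or \upcite uses in the .tex file:

@article{myref2024,                 % placeholder key; cite it as \upcite{myref2024}
  author  = {Lastname, Firstname},  % all values below are placeholders
  title   = {Placeholder title},
  journal = {Placeholder journal},
  volume  = {1},
  number  = {1},
  pages   = {1--10},
  year    = {2024},
}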
Attachment: LaTeX may fail to compile this file if IEEEtran.cls is missing; the attached file is IEEEtran.cls.
Place the three files template.tex, mybib.bib, and IEEEtran.cls in the same folder; compiling them then produces the PDF.
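If you compile from the command line instead of the latexstudio build button, the usual sequence is the one sketched below (assuming the file names template.tex and mybib.bib used above; depending on how the ctex package is configured, xelatex may need to replace pdflatex):

pdflatex template.tex   # first pass: records the \cite keys in template.aux
bibtex template         # reads template.aux and mybib.bib, writes template.bbl
pdflatex template.tex   # pulls the formatted bibliography into the document
pdflatex template.tex   # final pass: resolves remaining cross-references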