[Erlang_question21] Application of the Erlang performance analysis tools eprof and fprof

Source: Internet
Author: User
Some time ago, after a sudden change to the project code, CPU usage started fluctuating wildly. After a long time I still had not found the cause, so I turned to the performance testing tools described in Programming Erlang by Joe Armstrong [haha, the first man on the moon was also an Armstrong], p. 416:
cprof counts the number of calls to each function. It is a lightweight tool that adds roughly 5%~10% extra load to the system. fprof reports the time spent in each function call and in its callees, and writes the results to a file; it is suited to a first round of large-scale performance analysis in an experimental or simulated environment, and it puts a significant load on the system. eprof measures where an Erlang program spends most of its time; it is in fact the predecessor of fprof, the difference being that it is suitable for preliminary performance evaluation at a smaller scale.
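For orientation, this is roughly how the underlying OTP tools are driven by hand. A minimal sketch only, assuming you run it from the Erlang shell; the workload and the pids to profile (Pid1, Pid2) are placeholders:

%% Sketch only: counting calls with cprof, then timing with eprof.
cprof:start(),                        % start counting calls in all loaded modules
%% ... exercise the suspicious code path ...
cprof:pause(),                        % freeze the counters
cprof:analyse(lists),                 % per-function call counts for module 'lists'
cprof:stop(),

eprof:start(),
eprof:start_profiling([Pid1, Pid2]),  % Pid1/Pid2: placeholder pids to profile
%% ... exercise the suspicious code path ...
eprof:stop_profiling(),
eprof:analyze(),                      % print where the time was spent, per function
eprof:stop().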
So, for ease of use, I wrote a wrapper around them. Call seek_proc_info:help() to see what each function does; you can also treat the module as a worked example of using eprof and fprof.
2> seek_proc_info:help().
Brief help:
seek_proc_info:queue(N) - show top N pids sorted by queue length
seek_proc_info:memory(N) - show top N pids sorted by memory usage
seek_proc_info:reds(N) - show top N pids sorted by reductions
Erlang shell with Ctrl+C
seek_proc_info:eprof_start() - start eprof on all available pids; DO NOT use on production system!
seek_proc_info:eprof_stop() - stop eprof and print result
seek_proc_info:fprof_start() - start fprof on all available pids; DO NOT use on production system!
seek_proc_info:fprof_stop() - stop fprof and print formatted result
seek_proc_info:fprof_start(N) - start and run fprof for N seconds; use seek_proc_info:fprof_analyze() to analyze collected statistics and print formatted result; use on production system with CARE
seek_proc_info:fprof_analyze() - analyze previously collected statistics using seek_proc_info:fprof_start(N) and print formatted result
seek_proc_info:help() - print this help
ok
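On a test node, the eprof side of the wrapper is used roughly as follows (a sketch based on the help text above, with the report output elided; as the help says, do not do this on a production system):

1> seek_proc_info:eprof_start().   % attach eprof to every process of the ?APPS applications
Searching for processes to profile...
...
2> %% reproduce the suspicious workload for a while, then:
2> seek_proc_info:eprof_stop().    % stop profiling and print the per-function time report
...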
You only need to add the applications you want to monitor to the ?APPS macro (see the example below) to view the processes' message queues, memory usage, and running state. You can modify the source code below to show other per-process information.
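For example, if the application you want to profile is registered as my_app (a placeholder name here), the ?APPS macro near the top of the module becomes:

%% Only processes whose group leader belongs to one of these applications
%% are picked up by eprof_start/fprof_start; my_app is a placeholder.
-define(APPS, [my_app, kernel, mnesia]).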
%%%-------------------------------------------------------------------
%%% @author [email protected]
%%% @doc
%%%   An interface for eprof and fprof
%%%   call ?MODULE:help() to see
%%% @end
%%% Created : 03. Sep 2014 3:09 PM
%%%-------------------------------------------------------------------
-module(seek_proc_info).

%% API
-export([eprof_start/0, eprof_stop/0,
         fprof_start/0, fprof_start/1,
         fprof_stop/0, fprof_analyze/0,
         queue/1, memory/1,
         reds/1, help/0]).

-define(APPS, [kernel, mnesia]).

%%====================================================================
%% API
%%====================================================================
help() ->
    io:format("Brief help:~n"
              "~p:queue(N) - show top N pids sorted by queue length~n"
              "~p:memory(N) - show top N pids sorted by memory usage~n"
              "~p:reds(N) - show top N pids sorted by reductions~n"
              "Erlang shell with Ctrl+C~n"
              "~p:eprof_start() - start eprof on all available pids; "
              "DO NOT use on production system!~n"
              "~p:eprof_stop() - stop eprof and print result~n"
              "~p:fprof_start() - start fprof on all available pids; "
              "DO NOT use on production system!~n"
              "~p:fprof_stop() - stop fprof and print formatted result~n"
              "~p:fprof_start(N) - start and run fprof for N seconds; "
              "use ~p:fprof_analyze() to analyze collected statistics and "
              "print formatted result; use on production system with CARE~n"
              "~p:fprof_analyze() - analyze previously collected statistics "
              "using ~p:fprof_start(N) and print formatted result~n"
              "~p:help() - print this help~n",
              lists:duplicate(12, ?MODULE)).

eprof_start() ->
    eprof:start(),
    case lists:keyfind(running, 1, application:info()) of
        {_, Apps} ->
            case get_procs(?APPS, Apps) of
                [] ->
                    {error, no_procs_found};
                Procs ->
                    eprof:start_profiling(Procs)
            end;
        _ ->
            {error, no_app_info}
    end.

fprof_start() ->
    fprof_start(0).

fprof_start(Duration) ->
    case lists:keyfind(running, 1, application:info()) of
        {_, Apps} ->
            case get_procs(?APPS, Apps) of
                [] ->
                    {error, no_procs_found};
                Procs ->
                    fprof:trace([start, {procs, Procs}]),
                    io:format("Profiling started~n"),
                    if Duration > 0 ->
                            timer:sleep(Duration * 1000),
                            fprof:trace([stop]),
                            fprof:stop();
                       true ->
                            ok
                    end
            end;
        _ ->
            {error, no_app_info}
    end.

fprof_stop() ->
    fprof:trace([stop]),
    fprof:profile(),
    fprof:analyse([totals, no_details, {sort, own},
                   no_callers, {dest, "fprof.analysis"}]),
    fprof:stop(),
    format_fprof_analyze().

fprof_analyze() ->
    fprof_stop().

eprof_stop() ->
    eprof:stop_profiling(),
    eprof:analyze().

queue(N) ->
    dump(N, lists:reverse(lists:ukeysort(1, all_pids(queue)))).

memory(N) ->
    dump(N, lists:reverse(lists:ukeysort(3, all_pids(memory)))).

reds(N) ->
    dump(N, lists:reverse(lists:ukeysort(4, all_pids(reductions)))).

%%====================================================================
%% Internal functions
%%====================================================================
get_procs(Apps, AppList) ->
    io:format("Searching for processes to profile...~n", []),
    Procs = lists:foldl(
              fun({App, Leader}, Acc) when is_pid(Leader) ->
                      case lists:member(App, Apps) of
                          true ->
                              get_procs2(Leader) ++ Acc;
                          false ->
                              Acc
                      end;
                 (_, Acc) ->
                      Acc
              end, [], AppList),
    io:format("Found ~p processes~n", [length(Procs)]),
    Procs.

get_procs2(Leader) ->
    lists:filter(
      fun(Pid) ->
              case process_info(Pid, group_leader) of
                  {_, Leader} ->
                      true;
                  _ ->
                      false
              end
      end, processes()).

format_fprof_analyze() ->
    case file:consult("fprof.analysis") of
        {ok, [_, [{totals, _, _, TotalOWN}] | Rest]} ->
            OWNs = lists:flatmap(
                     fun({MFA, _, _, OWN}) ->
                             Percent = OWN * 100 / TotalOWN,
                             case round(Percent) of
                                 0 -> [];
                                 _ -> [{mfa_to_list(MFA), Percent}]
                             end
                     end, Rest),
            ACCs = collect_accs(Rest),
            MaxACC = find_max(ACCs),
            MaxOWN = find_max(OWNs),
            io:format("=== Sorted by OWN:~n"),
            lists:foreach(
              fun({MFA, Per}) ->
                      L = length(MFA),
                      S = lists:duplicate(MaxOWN - L + 2, $ ),
                      io:format("~s~s~.2f%~n", [MFA, S, Per])
              end, lists:reverse(lists:keysort(2, OWNs))),
            io:format("~n=== Sorted by ACC:~n"),
            lists:foreach(
              fun({MFA, Per}) ->
                      L = length(MFA),
                      S = lists:duplicate(MaxACC - L + 2, $ ),
                      io:format("~s~s~.2f%~n", [MFA, S, Per])
              end, lists:reverse(lists:keysort(2, ACCs)));
        Err ->
            Err
    end.

mfa_to_list({M, F, A}) ->
    atom_to_list(M) ++ ":" ++ atom_to_list(F) ++ "/" ++ integer_to_list(A);
mfa_to_list(F) when is_atom(F) ->
    atom_to_list(F).

find_max(List) ->
    find_max(List, 0).

find_max([{V, _} | Tail], Acc) ->
    find_max(Tail, lists:max([length(V), Acc]));
find_max([], Acc) ->
    Acc.

collect_accs(List) ->
    List1 = lists:filter(
              fun({{sys, _, _}, _, _, _}) ->
                      false;
                 ({suspend, _, _, _}) ->
                      false;
                 ({{gen_fsm, _, _}, _, _, _}) ->
                      false;
                 ({{gen, _, _}, _, _, _}) ->
                      false;
                 ({{gen_server, _, _}, _, _, _}) ->
                      false;
                 ({{proc_lib, _, _}, _, _, _}) ->
                      false;
                 (_) ->
                      true
              end, List),
    calculate(List1).

calculate(List1) ->
    TotalACC = lists:sum([A || {_, _, A, _} <- List1]),
    List2 = lists:foldl(
              fun({MFA, _, ACC, _}, NewList) ->
                      Percent = ACC * 100 / TotalACC,
                      case round(Percent) of
                          0 -> NewList;
                          _ -> [{mfa_to_list(MFA), Percent} | NewList]
                      end
              end, [], List1),
    lists:reverse(List2).

all_pids(Type) ->
    lists:foldl(
      fun(P, Acc) when P == self() ->
              Acc;
         (P, Acc) ->
              case catch process_info(
                           P, [message_queue_len, memory, reductions,
                               dictionary, current_function, registered_name]) of
                  [{_, Len}, {_, Memory}, {_, Reds},
                   {_, Dict}, {_, CurFun}, {_, RegName}] ->
                      IntQLen = get_internal_queue_len(Dict),
                      if Type == queue andalso Len == 0 andalso IntQLen == 0 ->
                              Acc;
                         true ->
                              [{lists:max([Len, IntQLen]),
                                Len, Memory, Reds, Dict, CurFun, P, RegName} | Acc]
                      end;
                  _ ->
                      Acc
              end
      end, [], processes()).

get_internal_queue_len(Dict) ->
    case lists:keysearch('$internal_queue_len', 1, Dict) of
        {value, {_, N}} -> N;
        _ -> 0
    end.

dump(N, Rs) ->
    lists:foreach(
      fun({_, MsgQLen, Memory, Reds, Dict, CurFun, Pid, RegName}) ->
              io:format("** pid(~s)~n"
                        "** registered name: ~p~n"
                        "** memory: ~p~n"
                        "** reductions: ~p~n"
                        "** message queue len: ~p~n"
                        "** current_function: ~p~n"
                        "** dictionary: ~p~n~n",
                        [pid_to_list(Pid), RegName, Memory, Reds, MsgQLen, CurFun, Dict])
      end, lists:sublist(Rs, N)).
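Putting it together, a cautious fprof run on a live node might look like the sketch below; the 30-second window is an arbitrary choice, and the analysis output is elided:

1> seek_proc_info:fprof_start(30).   % trace the ?APPS processes for 30 seconds, then stop the trace
Profiling started
ok
2> seek_proc_info:fprof_analyze().   % profile fprof.trace, write fprof.analysis, print the OWN/ACC tables
...
3> seek_proc_info:memory(5).         % top 5 processes by memory usage
...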

 

Onions, radishes, and tomatoes did not believe there was such a thing as a pumpkin in the world; they thought it was a fantasy. The pumpkin said nothing and just kept growing silently. -- Jürg Schubiger, "When the World Was Still Young"

