Since the previous post was already getting too long, we continue here with the analysis of setnode_3 in dist.c, and with what net_kernel does after a connection is successfully established.
BIF_RETTYPE setnode_3(BIF_ALIST_3)
{
    BIF_RETTYPE ret;
    Uint flags;
    unsigned long version;
    Eterm ic, oc;
    Eterm *tp;
    DistEntry *dep = NULL;
    Port *pp = NULL;

    /* Prepare for success */
    ERTS_BIF_PREP_RET(ret, am_true);

    /*
     * Check and pick out arguments
     */
    if (!is_node_name_atom(BIF_ARG_1) ||
        is_not_internal_port(BIF_ARG_2) ||
        (erts_this_node->sysname == am_Noname)) {
        goto badarg;
    }

    if (!is_tuple(BIF_ARG_3))
        goto badarg;
    tp = tuple_val(BIF_ARG_3);
    if (*tp++ != make_arityval(4))
        goto badarg;
    if (!is_small(*tp))
        goto badarg;
    flags = unsigned_val(*tp++);
    if (!is_small(*tp) || (version = unsigned_val(*tp)) == 0)
        goto badarg;
    ic = *(++tp);
    oc = *(++tp);
    if (!is_atom(ic) || !is_atom(oc))
        goto badarg;

    /* DFLAG_EXTENDED_REFERENCES is compulsory from R9 and forward */
    if (!(DFLAG_EXTENDED_REFERENCES & flags)) {
        erts_dsprintf_buf_t *dsbufp = erts_create_logger_dsbuf();
        erts_dsprintf(dsbufp, "%T", BIF_P->common.id);
        if (BIF_P->common.u.alive.reg)
            erts_dsprintf(dsbufp, " (%T)", BIF_P->common.u.alive.reg->name);
        erts_dsprintf(dsbufp,
                      " attempted to enable connection to node %T "
                      "which is not able to handle extended references.\n",
                      BIF_ARG_1);
        erts_send_error_to_logger(BIF_P->group_leader, dsbufp);
        goto badarg;
    }

    /*
     * Arguments seem to be in order.
     */

    /* Get dist_entry */
    dep = erts_find_or_insert_dist_entry(BIF_ARG_1);
    if (dep == erts_this_dist_entry)
        goto badarg;
    else if (!dep)
        goto system_limit; /* Should never happen!!! */

    /* Get the port structure via the port id */
    pp = erts_id2port_sflgs(BIF_ARG_2,
                            BIF_P,
                            ERTS_PROC_LOCK_MAIN,
                            ERTS_PORT_SFLGS_INVALID_LOOKUP);
    erts_smp_de_rwlock(dep);

    if (!pp || (erts_atomic32_read_nob(&pp->state)
                & ERTS_PORT_SFLG_EXITING))
        goto badarg;

    if ((pp->drv_ptr->flags & ERL_DRV_FLAG_SOFT_BUSY) == 0)
        goto badarg;

    /* If the current cid is the same as the incoming port id, and the
     * port's dist_entry is the same as the dep we found, go straight
     * to the done stage */
    if (dep->cid == BIF_ARG_2 && pp->dist_entry == dep)
        goto done; /* Already set */

    if (dep->status & ERTS_DE_SFLG_EXITING) {
        /* Suspend on dist entry waiting for the exit to finish */
        ErtsProcList *plp = erts_proclist_create(BIF_P);
        plp->next = NULL;
        erts_suspend(BIF_P, ERTS_PROC_LOCK_MAIN, NULL);
        erts_smp_mtx_lock(&dep->qlock);
        erts_proclist_store_last(&dep->suspended, plp);
        erts_smp_mtx_unlock(&dep->qlock);
        goto yield;
    }

    ASSERT(!(dep->status & ERTS_DE_SFLG_EXITING));

    if (pp->dist_entry || is_not_nil(dep->cid))
        goto badarg;

    erts_atomic32_read_bor_nob(&pp->state, ERTS_PORT_SFLG_DISTRIBUTION);

    /*
     * Dist-ports do not use the "busy port message queue" functionality,
     * but instead use "busy dist entry" functionality.
     */
    {
        ErlDrvSizeT disable = ERL_DRV_BUSY_MSGQ_DISABLED;
        erl_drv_busy_msgq_limits(ERTS_Port2ErlDrvPort(pp), &disable, NULL);
    }

    /* Update the dist_entry associated with the port */
    pp->dist_entry = dep;

    dep->version = version;
    dep->creation = 0;

    ASSERT(pp->drv_ptr->outputv || pp->drv_ptr->output);

#if 1
    dep->send = (pp->drv_ptr->outputv
                 ? dist_port_commandv
                 : dist_port_command);
#else
    dep->send = dist_port_command;
#endif
    ASSERT(dep->send);

#ifdef DEBUG
    erts_smp_mtx_lock(&dep->qlock);
    ASSERT(dep->qsize == 0);
    erts_smp_mtx_unlock(&dep->qlock);
#endif

    /* Update the dist_entry cid */
    erts_set_dist_entry_connected(dep, BIF_ARG_2, flags);

    if (flags & DFLAG_DIST_HDR_ATOM_CACHE)
        create_cache(dep);

    erts_smp_de_rwunlock(dep);
    dep = NULL; /* inc of refc transferred to port (dist_entry field) */

    /* Increase the count of remote nodes */
    inc_no_nodes();

    /* Send monitoring messages to subscribing processes */
    send_nodes_mon_msgs(BIF_P,
                        am_nodeup,
                        BIF_ARG_1,
                        flags & DFLAG_PUBLISHED ? am_visible : am_hidden,
                        NIL);

 done:
    if (dep && dep != erts_this_dist_entry) {
        erts_smp_de_rwunlock(dep);
        erts_deref_dist_entry(dep);
    }

    if (pp)
        erts_port_release(pp);

    return ret;

 yield:
    ERTS_BIF_PREP_YIELD3(ret, bif_export[BIF_setnode_3],
                         BIF_P, BIF_ARG_1, BIF_ARG_2, BIF_ARG_3);
    goto done;

 badarg:
    ERTS_BIF_PREP_ERROR(ret, BIF_P, BADARG);
    goto done;

 system_limit:
    ERTS_BIF_PREP_ERROR(ret, BIF_P, SYSTEM_LIMIT);
    goto done;
}
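Before digging into the body, it helps to see how this BIF is reached from the Erlang side. The sketch below is a simplification (the names and the exact record fields are assumptions based on dist_util:do_setnode/1 in OTP of this era, not a verbatim copy): after the handshake succeeds, the connection owner asks the driver for the low-level port and hands it to the emulator together with the negotiated flags and version. The two trailing '' fields are the legacy in/out cookie slots that setnode_3 validates as atoms but otherwise ignores.

```erlang
%% Sketch, simplified from dist_util:do_setnode/1 (names hypothetical).
%% GetLL is the distribution module's f_getll callback, which maps the
%% transport (e.g. a gen_tcp socket) to the underlying port id.
do_setnode(Node, Socket, Flags, Version, GetLL) ->
    case GetLL(Socket) of
        {ok, Port} ->
            %% This is the call that lands in setnode_3 in dist.c:
            erlang:setnode(Node, Port, {Flags, Version, '', ''}),
            ok;
        _Error ->
            error
    end.
```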
setnode_3 first inserts the name of the remote node into the dist hash table, and associates that table entry with the port connected to the remote node.
The port connected to the remote node is then marked with ERTS_PORT_SFLG_DISTRIBUTION, so that when a port becomes busy or is destroyed, ERTS can tell a normal port apart from a distribution port; when a distribution port is destroyed, erts_do_net_exits in dist.c is called to tell ERTS that the remote node has gone down. Once all of this succeeds, a nodeup message is broadcast inside ERTS. So who receives this nodeup? The recipients are the processes that set the monitor_nodes flag via process_flag, though this option does not appear in the documentation; if we want to listen for nodeup events, the supported way is the net_kernel:monitor_nodes function.
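To make this concrete, here is a minimal subscriber using the documented net_kernel:monitor_nodes/2 API (the undocumented process_flag(monitor_nodes, ...) path mentioned above is the mechanism underneath it); the module name node_watcher is just for illustration:

```erlang
-module(node_watcher).
-export([start/0]).

%% Spawn a process that subscribes to nodeup/nodedown events for all
%% node types (visible and hidden) and logs them as they arrive.
start() ->
    spawn(fun() ->
                  ok = net_kernel:monitor_nodes(true, [{node_type, all}]),
                  loop()
          end).

loop() ->
    receive
        {nodeup, Node, _Info} ->
            io:format("node up: ~p~n", [Node]),
            loop();
        {nodedown, Node, _Info} ->
            io:format("node down: ~p~n", [Node]),
            loop()
    end.
```

Note that with the options list the messages carry a third Info element; the plain net_kernel:monitor_nodes(true) form delivers two-element {nodeup, Node} and {nodedown, Node} tuples instead.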
We said last time that the process responsible for connecting to the remote node notifies the net_kernel process, so let's go on and see what net_kernel does with that message.
handle_info({SetupPid, {nodeup, Node, Address, Type, Immediate}}, State) ->
    case {Immediate, ets:lookup(sys_dist, Node)} of
        {true, [Conn]} when Conn#connection.state =:= pending,
                            Conn#connection.owner =:= SetupPid ->
            ets:insert(sys_dist, Conn#connection{state = up,
                                                 address = Address,
                                                 waiting = [],
                                                 type = Type}),
            SetupPid ! {self(), inserted},
            reply_waiting(Node, Conn#connection.waiting, true),
            {noreply, State};
        _ ->
            SetupPid ! {self(), bad_request},
            {noreply, State}
    end;
This updates the connection's status in ETS and sends a message to every waiting process, telling them that the remote connection has succeeded and they can proceed with their next operation.
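The "waiting" list above holds callers that blocked on the connection attempt; from user code this corresponds to, for example, net_kernel:connect_node/1, which only returns once net_kernel has replied to the waiters. A small hedged sketch of how one might wrap it (the connect/1 wrapper and its return shapes are illustrative, not OTP code):

```erlang
%% Wrapper (hypothetical) around the real net_kernel:connect_node/1,
%% which blocks until net_kernel replies to the waiting caller.
connect(Node) ->
    case net_kernel:connect_node(Node) of
        true    -> {ok, Node};          % sys_dist entry is now in state 'up'
        false   -> {error, nodedown};   % connection attempt failed
        ignored -> {error, not_alive}   % local node is not distributed
    end.
```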
At this point you may be surprised to ask: where is the heartbeat? No hurry; let's look back at net_kernel's init function.
init({Name, LongOrShortNames, TickT}) ->
    process_flag(trap_exit, true),
    case init_node(Name, LongOrShortNames) of
        {ok, Node, Listeners} ->
            process_flag(priority, max),
            Ticktime = to_integer(TickT),
            Ticker = spawn_link(net_kernel, ticker, [self(), Ticktime]),
            {ok, #state{name = Name,
                        node = Node,
                        type = LongOrShortNames,
                        tick = #tick{ticker = Ticker, time = Ticktime},
                        connecttime = connecttime(),
                        connections =
                            ets:new(sys_dist, [named_table,
                                               protected,
                                               {keypos, 2}]),
                        listen = Listeners,
                        allowed = [],
                        verbose = 0}};
        Error ->
            {stop, Error}
    end.
net_kernel first spawns a ticker process whose sole job is to send heartbeat ticks to the net_kernel process; on each tick, net_kernel traverses all remote connections and sends a heartbeat over each one. When we change a node's heartbeat period, an aux_ticker process is started to bridge the transition until every connected node has learned the new period; once they all have, the aux_ticker quietly exits, its historic mission complete.
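The ticker itself is just a timed loop. The version below is a standalone sketch of the shape of that loop, not the real net_kernel code (assumption: the real ticker derives its interval from the net_ticktime, dividing the period so that a node is declared down after several missed ticks):

```erlang
%% Sketch of the ticker pattern: sleep for TickMs milliseconds, poke the
%% kernel process with a 'tick' message, and loop forever.
ticker(Kernel, TickMs) ->
    receive
    after TickMs ->
            Kernel ! tick,
            ticker(Kernel, TickMs)
    end.
```

Changing the period at runtime is done with net_kernel:set_net_ticktime/1, which is what triggers the transitional aux_ticker described above.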
So how is a dead remote node discovered? Naturally, when TCP transmission fails, the corresponding port is cleaned up; this can be seen in erts_do_net_exits in dist.c.
With all of this done, we will next go back to the global module and the global_group module to analyze what happens on nodeup: how two nodes synchronize their global names.
Let's talk about Erlang's node interconnection (ii)