The computation of several graph measures based on the distance between nodes is very often part of the analysis of real-world complex networks. The diameter, the betweenness and closeness centralities, and the hyperbolicity are typical examples of such measures. In this talk, we will focus on the computation of one specific measure, namely the closeness centrality of a node, which is essentially the inverse of the average distance from that node to all other nodes of the graph. Even though polynomial-time algorithms are available for the computation of this measure, in practice these algorithms are not useful, due to the huge size of the networks to be analysed. A first theoretical question is, hence, whether better algorithms can be designed, whose worst-case complexity is (almost) linear in the size of the input graph. We will first show that, unfortunately, for closeness centrality no such algorithm exists (under reasonable complexity-theoretic assumptions). This will lead us back to the practical point of view: we will then describe a heuristic that allows us to compute the above measure in (practical) linear time, even though its worst-case complexity remains (in practice) intractable. This result will finally motivate our return to theory, in order to understand why, in practice, this heuristic works so well: we will conclude by showing that, for several random graph generation models, the average time complexity of the heuristic is indeed (almost) linear. This talk summarises research work done in collaboration with Elisabetta Bergamini, Michele Borassi, Michel Habib, Andrea Marino, Henning Meyerhenke, and Luca Trevisan, and is mostly based on two papers presented at ALENEX 2016 and SODA 2017.
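
As a concrete reference point, one standard way to formalise the closeness centrality described above (assuming a connected graph G = (V, E) with n = |V| nodes and shortest-path distance d) is

\[
c(v) \;=\; \frac{n-1}{\sum_{u \in V \setminus \{v\}} d(v,u)},
\]

so that computing c(v) exactly for every node, for instance by running one breadth-first search per node on an unweighted graph with m edges, takes time on the order of n(n + m); it is precisely this cost that becomes infeasible on huge networks.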