This site is powered by CommentPress, which allows comments to be attached to individual paragraphs or to an entire document. To comment on a post, you first have to click on the title of the post itself in order to move from the excerpt to the full text. To leave a comment on a paragraph, click on the text itself, or on the speech bubble to its right, and the comment pane will open. You can also select specific text in the paragraph, which will be included at the beginning of the comment. To leave a comment on an entire post, click the link to “Comments on the whole post” at right.
Comments are moderated for first-time unregistered commenters, but only as a means of spam-prevention; comments will not be filtered for content. Commenters are required to submit name and email information in order to comment, but your email address will not be linked or displayed. Logged-in members of Humanities Commons can comment without the submission of additional information.
Do these records belong to a specific class of works (papers/books…) or to a particular timespan, or are they random records in the corpus?
A question about the construction of the timespans: do the numbers for a given span (say, the ’80s) refer only to that span, or do they also include earlier works? In other words: do these numbers indicate works published during the timespan, or works available during the timespan? In the first case, the stability of the percentage of “occasional” Wittgensteinian authors could be interesting and would raise interest in the percentages of other groups of authors (would it be useful to define classes other than the “occasional” one?).
I can’t understand the footnote in the table, “not including LW himself”, attached to the number of authors with more than 3 publications with “Wittgenstein” in the title in the ’90s.
The simplest way to “weight” the data in fig. 8 (as in figs. 2a and 2b) could be to express them as percentages of the authors active in each decade (and indexed by PI).
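A minimal sketch of this weighting, in Python; all numbers below are hypothetical placeholders, not PI data:

```python
# Sketch of the proposed weighting: counts of Wittgenstein-title authors
# per decade, divided by the authors active (and indexed by PI) in that
# decade. All numbers are hypothetical placeholders.
wittgenstein_authors = {"1970s": 210, "1980s": 340, "1990s": 420}
active_authors = {"1970s": 9000, "1980s": 14000, "1990s": 21000}

for decade, n in wittgenstein_authors.items():
    share = 100 * n / active_authors[decade]
    print(f"{decade}: {share:.2f}% of active authors")
```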
I don’t know how PI’s indexing deals with translations, and it would be useful to have this made explicit. For example, is it possible to find a situation in which a record from a publisher (say, a Spanish one) is just a translation of an important work in English or German? It would of course be an interesting datum that Spanish publishers want to keep an up-to-date Wittgensteinian catalogue, but in a different sense from a scenario in which many academic books about Wittgenstein are written in Spain (and originally in Spanish).
On the PI website, dissertations are not mentioned among the indexed material, so this datum seems even more obscure. A possible explanation is that four dissertations with “Wittgenstein” in the title were published in one of the indexed categories (articles, books and e-books, dictionaries and encyclopedias, anthologies and contributions to anthologies, and book reviews), while their records still noted that they were originally dissertations; but this hypothesis would need to be verified.
If I understood correctly, all the PI records containing the term “Wittgenstein” in the TITLE field are collected here.
[There are other, more specialised “sources” devoted to LW (e.g. the Nordic Wittgenstein Review (NWR), official journal of the Nordic Wittgenstein Society (NWS); or Wittgenstein Studies (WS), a series established in 2010 by the International Ludwig Wittgenstein Society and “designed as an annual forum for Wittgenstein research,” with only two volumes at present), and it is noteworthy that they do not figure in this list (indeed they rank very low, NWR at 448 and WS at 150).]
It could be worthwhile to keep this fact in mind when doing distant reading on titles (in general): can we presume that papers meant to be published in journals specifically devoted to an author (or a topic) mentioned in the journal’s name are less likely to make explicit that they belong to that field? In other words: if someone writes a paper for the Nordic Wittgenstein Review, we may assume that they will not need a title making explicit that the paper belongs to the Wittgenstein field. We could even expect that papers published in such journals which nevertheless use the name “Wittgenstein” in the title will be somehow peculiar compared to the others, perhaps referring to the man himself, or to his ideas in contrast or continuity with other thinkers.
For the future, you might be interested in using PhilPapers as a survey tool; it has already been done to find out what philosophers believe… https://philpapers.org/surveys/index.html
An indicator for this might be the bibliographical references in the paper, rather than the title…
I share Marco’s concerns on this. However, if I may intervene, research assessment based on bibliometrics can help uncover loops of vicious practices such as self-citation, the monopolization of topics and (arguably) journal boards, self-sustaining debates disconnected from the field, and h-index tampering. This is particularly manifest in the humanities in cases such as the creation of ad hoc journals, or the monopolization of supposedly “open” and prestigious journals by small networks of academics, usually from prestigious universities, which effectively bias editorial choices towards those institutions’ most represented research topics.
In this respect, it might be worth considering (if you don’t do it elsewhere already) how the core-periphery relation is structured around methodological differences as well as the background debate.
Though intuitively cogent, this observation should be corroborated by a weighted analysis of the continental/analytic divide that takes into account each author’s background; e.g., on the same dataset, it should be possible to run an analysis based on the “citation” distance from a “paradigmatic” author of each tradition (“small world” style).
Checking for inter-disciplinarity is an extremely interesting aim for future research. I think some methodological considerations are needed beforehand to better ground the line of research:
how is inter-disciplinarity defined?
is it possible to quantify inter-disciplinarity and if so, how?
is it possible to track and quantify inter-disciplinary influence (e.g. in terms of recursive topics and debates that are bounced back and forth between fields)?
I would be surprised if many dissertation abstracts of an analytic bent contained proper names of philosophers. Is it really so?
Dear Guido,
given the experimental nature of our work, we were interested in testing different approaches. I cannot argue that this particular framework (term mapping) is better suited than the previous one to the field of human geography; it simply reflects a different interest: tracing the fundamental elements of the “discourse” carried by this (representative) subset of the literature and displaying its evolution over time.
How is it relevant information that the most frequent words in Nietzsche’s moral writings (see preceding comment @117) are value and virtue? 1. they are overly common and expectable; 2. they belong to opposed models of moral theory. Some Erörterung would be useful to the reader.
Just presenting the network graphs does not read very well along the timeline. Maybe try building graphs comparing the evolution in (relative) frequency of terms over time? And of significant pairs/triples?
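For instance, a minimal sketch of such a frequency comparison; the terms, counts, and slice totals below are invented for illustration:

```python
# Sketch: line chart of relative term frequency across time slices,
# as an alternative to a sequence of network snapshots.
# All counts below are invented for illustration.
import matplotlib.pyplot as plt

slices = ["t1", "t2", "t3", "t4"]                                # hypothetical time slices
counts = {"value": [12, 18, 25, 30], "virtue": [8, 15, 22, 19]}  # hypothetical term counts
totals = [400, 520, 640, 700]                                    # hypothetical tokens per slice

for term, c in counts.items():
    plt.plot(slices, [x / t for x, t in zip(c, totals)], marker="o", label=term)
plt.ylabel("relative frequency")
plt.legend()
plt.show()
```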
“Moral psychology” or moral vocabulary?
It would be of interest to read some discussion of the granularity choices (“passage”).
Could not the discrepancy simply be due to an informed judgement on relevance and distinctiveness over frequency?
In the caption the network representation is described as a “semantic map”. Relative frequency (node size) is not semantic. There is something semantic in the relations between terms, but those are not really readable.
Dropping low-frequency connections (low-weight edges) could easily eliminate a clause like “X is in essence Y”.
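A toy illustration of this worry, with an invented weighted graph:

```python
# Toy weighted co-occurrence graph: a global weight threshold silently
# drops the rare but possibly crucial "essence" link. All data invented.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("will", "power", 12),
    ("value", "virtue", 9),
    ("life", "essence", 1),   # a rare clause like "X is in essence Y"
])
threshold = 2
dropped = [(u, v) for u, v, w in G.edges(data="weight") if w < threshold]
G.remove_edges_from(dropped)
print("dropped:", dropped)
```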
An aside: in places, quite apparently here, the style is a bit patronizing (“You might also strain yr eyes…”).
Could you elaborate a bit with an example?
An obvious remark would be that these qualifiers are not tied, as a rule, to specific frequencies or frequency thresholds, so a sorites-like fallacy would be waiting around the corner.
Dear Marco and Alessio,
thank you for your comments, which touch upon the more controversial side of the field of scientometrics, namely the use of scientometric indicators (or, more generally, scientometric techniques, including science mapping) in the context of research performance evaluation.
It must be said that scientometrics has always had an applied side. The Science Citation Index was created by Eugene Garfield in the 1960s to address a very concrete problem, namely improving information retrieval in science. However, it was soon realized that citation scores calculated with the SCI could have applications in science policy. The Journal Impact Factor was one of the first metrics adapted to this purpose (even if it was originally designed as a device for selecting journals to be added to the SCI). Already in the 1970s, scientometric indicators started to appear in OECD reports.
In general, we can distinguish two types of scientometric research: descriptive scientometrics, whose aim is to understand quantitatively the structure and dynamics of science, and evaluative scientometrics, whose aim is to provide quantitative tools (mainly indicators) for the assessment and evaluation of science.
As I see the history of science mapping, this research programme has been developed mainly within descriptive scientometrics. In fact, Henry Small, the inventor of co-citation analysis, is, in my opinion, one of the scientometricians most interested in the theory of science. He saw co-citation analysis as an empirical development of Kuhn’s philosophy of science.
Nonetheless, I must add that science mapping was soon recruited, so to say, into evaluative scientometrics. VOSviewer, for instance, is proposed and advertised by the CWTS as an effective tool for planning science-policy strategies (for instance, spotting the best papers of a university). In my opinion, this application of science mapping inevitably carries with it all the issues of evaluative scientometrics.
In this post, however, we use science mapping for purely descriptive purposes, in the spirit of its origins. Indeed, we refrain from any interpretation of the results in evaluative terms. We do not claim that high citation scores equal research quality. In the paper I wrote with Valerio Buonomo, we offer some arguments for distinguishing the meta-philosophical and normative notion of quality from the descriptive notion of impact (i.e., the citation score).
Buonomo, V. & Petrovich, E. (2018). Reconstructing Late Analytic Philosophy. A Quantitative Approach. Philosophical Inquiries, 6(1): 149-180, DOI: https://doi.org/10.4454/philinq.v6i1.184
I agree with Nicola. We must always remember that the visualization algorithm attempts to fulfill a difficult task, namely “squeezing”, so to say, a multi-dimensional space that humans cannot grasp into a 2-d visualization. This comes at a cost, which is called “stress”. Stress measures how much the 2-d visualization distorts the real structure of the data.
In order to check that the shape of the network is not an artifact of the algorithm but somehow reflects the underlying similarity matrix, different algorithms can be used and their results compared.
In the case of our datasets, the overall structure of the map was robust across the different algorithms implemented by VOSviewer, so we concluded that the visualization indeed captures the “real” structure of the analytic literature as it results from co-citation links.
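For readers who want to see the stress idea concretely, here is a minimal sketch (not the authors’ actual pipeline; a random matrix stands in for real co-citation similarities) using off-the-shelf multidimensional scaling:

```python
# Sketch: embed a (random, stand-in) similarity matrix in 2-d with MDS
# and read off the raw stress, i.e., how much the layout distorts the
# original distances. Illustration only, not the VOSviewer layout.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
sim = rng.random((10, 10))
sim = (sim + sim.T) / 2            # make the similarities symmetric
np.fill_diagonal(sim, 1.0)
dist = 1.0 - sim                   # similarity -> distance, zero diagonal

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)
print("raw stress of the 2-d layout:", mds.stress_)
```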
Yes, this is a very good suggestion. We will consider it for further works!
This is an interesting hypothesis that I will be glad to test in future research. More generally, I am really interested in studying, by citation analysis, the relationships between specialized philosophies of science (e.g., philosophy of biology, physics, economics, etc.) and their target sciences.
I should reflect more on this, also by analyzing other science maps. However, my guess is that both the very definition of co-citation analysis and the algorithm implemented suggest that specialized documents should be placed in the periphery of the co-citation network.
I would say that a highly specialized paper is cited by a smaller circle of papers than a generalist work, which is used by almost the whole community. Therefore, the former will easily be placed within a cluster (i.e., a sub-specialty), whereas the latter will sit, so to say, in the middle between different clusters and thus be placed in the center of the network.
However, I should definitely reflect more on this topic. Currently, I am trying to better figure out the very meaning of clusters, i.e., what they represent.
Yes, this is a valuable suggestion, thank you. The interpretation of the different features of the map definitely requires a lot of knowledge of the discipline under analysis. I believe that such expert knowledge can only be gathered, and used to interpret the maps, through collaboration.
At the end of the day, algorithms allow us to probe the data in new ways. However, the interpretation of the findings inevitably requires human experts in the field.
This is a very interesting and promising research question, thank you. I think it should be developed working in collaboration with someone more expert than me in network theory and the related mathematics. I hope to be able to do that in the future.
Yes, definitely. In the late Kuhn’s work there is an epistemological theory explaining why specialization occurs in science. See Wray, K. B. (2011). Kuhn’s Evolutionary Social Epistemology. Cambridge: Cambridge University Press.
This is a very good suggestion for future work, thank you very much. As far as I know, one of the hottest themes in contemporary network theory is developing mathematical measures for comparing different networks. Such mathematical tools are definitely needed to advance from the qualitative assessment of science maps to a more robust quantitative analysis.
Yes, this is a real issue. There is a trade-off between the size of the dataset and the reliability of the mapping. If you work with timespans that are too small, you will not have enough data, and the mapping will be totally unreliable. A workable solution for studying 5-year timespans could be to extend the number of journals considered, in order to have more data.
Moreover, there is a subtler issue, which has to do with the pace of philosophical change. Is it the same for all philosophical areas? Are there sub-disciplines that develop faster than others? What is the right time window for identifying stable structures in philosophy?
I think that the study of the “speed” of philosophical change is a topic that has completely escaped (as far as I know) the attention of the historians of philosophy.
As far as I know, the study of interdisciplinarity with quantitative methods has begun relatively recently in scientometrics. If I remember correctly, one of the strategies to address this topic consists of using betweenness centrality measures to find the publications that link separate literature clusters together. See for instance
Porter, A. L., Cohen, A. S., David Roessner, J., & Perreault, M. (2007). Measuring researcher interdisciplinarity. Scientometrics, 72(1), 117–147. https://doi.org/10.1007/s11192-007-1700-5
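As a toy illustration of that strategy (not of Porter et al.’s actual method): in a small invented graph, the node bridging two clusters gets the highest betweenness centrality.

```python
# Toy graph with two dense clusters joined by one bridging paper; the
# bridge gets the highest betweenness centrality. All nodes are invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("phil_A", "phil_B"), ("phil_B", "phil_C"), ("phil_A", "phil_C"),  # cluster 1
    ("bio_X", "bio_Y"), ("bio_Y", "bio_Z"), ("bio_X", "bio_Z"),        # cluster 2
    ("phil_C", "bridge"), ("bridge", "bio_X"),                         # interdisciplinary link
])
for node, score in sorted(nx.betweenness_centrality(G).items(),
                          key=lambda kv: -kv[1])[:3]:
    print(f"{node}: {score:.3f}")
```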
The differences between multi-, inter-, cross-, and trans-disciplinarity, and the related methodological issues, have been highly debated, at least since these terms achieved the status of buzzwords or mantras in science policy. The Oxford Handbook of Interdisciplinarity provides a rich introduction to the topic:
Frodeman, R., Klein, J. T., & Pacheco, R. C. S. (Eds.). (2017). The Oxford handbook of interdisciplinarity (Second edition). Oxford, United Kingdom: Oxford University Press.
Actually, we did not aim at mapping the whole of philosophy, but only generalist analytic philosophy. Clearly, in the future, we should extend the scope of the analysis in order to check whether the structure and dynamics we found in these five (representative) journals also hold more broadly.
It seems that the size of the chunks becomes especially relevant when dealing with association. Have you ever experimented with smaller or bigger chunks of text? Does that make any significant difference?
In particular, can different sizes be appropriate for different kinds of enquiry? Some examples would be interesting in this connection.
Maybe this is not the place for such considerations, but some more words on whether and how these results may confirm, correct or change traditional interpretations of Nietzsche’s philosophy would be helpful, especially for those who are not Nietzsche experts.
A naive question: do the thickness of the links and the distance between nodes represent different features?
[It seems that the farther one document is located in the map, the more specialized its content is.]
In your opinion, is this remark concerning specialization something that can only be gained through careful reflection on the specific empirical results, or rather a conclusion that can be drawn more or less a priori, given the way VOSviewer works?
[In this second case study, we tested a slightly different approach, analyzing the recent evolution of the main research themes in the discipline of human geography via term maps.]
Is this choice dictated only by the wish to experiment with a different approach, or are there reasons that suggest this approach is better suited to the case of human geography? If the latter, why?
Sorry for the nitpicking, but I can see a tension between two claims: “they find application in the science policy” (descriptive claim) and “where they can be used to assess strengths and weaknesses in the research performance of institutions” (normative claim, given that you use “can”?).
In particular, even though it is partially unrelated to the broader scope of this article, I wouldn’t mind some comments on whether (and when, and to what extent, and how…) it is appropriate to use scientometrics for evaluative purposes. I am pretty skeptical about some applications, especially when bibliometrics substitutes for human judgement.
A hypothesis I’d love to test is whether we can distinguish two strands of philosophy of mind: namely, one stemming from philosophy of language, and the other from philosophy of science (and the philosophies of particular sciences, such as biology). My bet is that language-inspired philosophy of mind is more related to this paradigm, whereas science-inspired philosophy of mind draws much more from empirical papers.
Arguably, the same happens in other fields.
See for instance Cedrini, M., & Fontana, M. (2018). Just another niche in the wall? How specialization is changing the face of mainstream economics. Cambridge Journal of Economics, 42, 427–451.
[we would like to extend the overall timespan, reducing the single intervals]
I suspect that as you move the timespan further into the past, you find increasingly scattered, unreliable data. Or is there a countermeasure?
This might, however, introduce a kind of bias. Assuming that specialization and clustering are common practices in the humanities and social sciences, you might have overlooked some disciplines only because they had already developed their own niche.
This paragraph prompts me to reflect upon what I might call “naive bibliology”. While naive physics and folk psychology refer to our everyday expectations and proto-theories about how physical objects and minds (respectively) behave, naive bibliology is about our proto-theories and naive expectations about books, articles, and other kinds of writing. We expect, for instance, that they have a single author, who is fully responsible for what (s)he writes; we expect that they have a publication year; and so on and so forth. Now, while these expectations are not totally unreasonable, it is very nice when a paragraph like this one ‘unpacks’ writings into their complex genesis and reminds us that authorship and similar concepts are sometimes approximations, abstractions, or whatever.
Interesting point following my earlier comment.
… and/or the language toward which PI is biased!
An explanation of how you operationalize “Wittgensteinian” is in order, though! Should LW’s name appear in the abstract? In the title? Or is any citation in the bibliography sufficient?
(… or did I miss it?)
How does this differ from the distribution in other topics? http://dailynous.com/2019/05/30/visualization-gender-distribution-philosophy-research-topics/
Again, there might be some biases due to using PI as a source.
However, Mario Cedrini (another researcher who applies topic modeling to the history of economic thought, and an affiliate member of DR2) notes that abstracts are intentional, i.e. they reflect much more what the authors emphasize, as opposed to full texts, which may embed the unintentional parts of the creative process much more.
The “italics bias” hypothesis might be worth testing. If this is true, it might be generalized to so many sources that it might compel us to revise our use of italics.
I guess a nice strategy might consist in circumscribing some sources from the secondary literature and then comparing the semantic network of those papers with Nietzsche’s own. If you see a bias in the secondary literature toward italicized words, then that’s it. Or am I wrong?
(Bonus: if different languages italicize different words, that might be even more interesting).
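A back-of-the-envelope sketch of that comparison; the term list and the two snippets are invented stand-ins for the primary and secondary corpora:

```python
# Sketch: share of (hypothetically) italicized terms in the primary text
# vs. the secondary literature; a systematic gap would support the bias.
from collections import Counter

italicized = {"ressentiment", "amor", "fati"}   # hypothetical term list
primary = "amor fati appears once and ressentiment too".split()
secondary = "ressentiment ressentiment amor fati fati discussed at length".split()

def share(tokens):
    counts = Counter(tokens)
    return sum(counts[t] for t in italicized) / len(tokens)

print("primary:", round(share(primary), 3))
print("secondary:", round(share(secondary), 3))
```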
Yes, that’s right. But of course we can come up with reasonable-yet-defeasible thresholds for what counts as “often.”
This is a good idea. So far, I have only used paragraphs as my window size. The resulting dataset is already quite sparse, so I am skeptical that the sentence level would still be informative. By contrast, the chapter level would probably be quite interesting, and the book level would probably be too broad.
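A minimal sketch of what changing the window does to co-occurrence counts, on an invented toy text:

```python
# Sketch: the same toy text chunked at "sentence" level vs. as one big
# chunk; the coarser window makes almost everything co-occur.
from collections import Counter
from itertools import combinations

text = "value and virtue. virtue alone. value again and virtue. power here. value and power"
sentences = [s.split() for s in text.split(". ")]

def cooccurrence(chunks):
    pairs = Counter()
    for chunk in chunks:
        for a, b in combinations(sorted(set(chunk)), 2):
            pairs[(a, b)] += 1
    return pairs

print("sentence window:", cooccurrence(sentences).most_common(3))
print("document window:", cooccurrence([text.replace(".", "").split()]).most_common(3))
```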
Yes, this approach is best-suited to a well-known corpus. Otherwise, it’s probably best just to use stopwords.
Maybe I’m missing something, but all the disjunctions are listed in the above table.
My (uncontroversial?) assumption is that language expresses thought, so the vocabulary represents the psychology.
For Nietzsche studies: supplement my list of concepts/terms with the keywords from all journal articles about Nietzsche.
For other philosophers: do the same process but with their corpus.
My thought here is that the maps I’m creating set a sort of default. We might move from the default if we think that a particular passage is especially important, but that requires argument. Nietzsche studies suffers from a huge amount of cherry-picking by commentators, so establishing this default is really important.
Sorry — I get annoyed by much of the secondary literature. 😉
The italics bias hypothesis sounds very plausible to me!
This kind of correspondence between concept and operationalization seems useful for a relatively small, homogeneous corpus previously known to the researcher, but could it be misleading for research on a larger or more diversified corpus, e.g. one comprising contributions from a large number of authors?
Supposedly, the thickness of the links and the distances between nodes represent the same feature, namely co-citation values. However, a link only represents the relation between two nodes, while the position of nodes in the map (and so their relative distances) results from normalization methods that aim to represent the relation of every node to the whole dataset. Arguably, the thickness of links is a more reliable kind of datum than the relative distance between nodes, because the former is determined by clearer and easier-to-define relations than the latter.
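To make the contrast concrete, here is a sketch (with an invented count matrix) of one standard normalization, association strength, which is, if I recall correctly, the default similarity that VOSviewer’s layout turns into distances:

```python
# Sketch: raw co-citation counts (link thickness) vs. a normalized
# similarity (association strength, c_ij / (c_i * c_j), up to a constant)
# of the kind that layout algorithms turn into distances. Counts invented.
import numpy as np

C = np.array([[0, 8, 1],
              [8, 0, 2],
              [1, 2, 0]])              # hypothetical co-citation counts
totals = C.sum(axis=1)                 # c_i: each item's total co-citations

S = C / np.outer(totals, totals)       # normalized similarity matrix
print(S.round(4))
```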
It could be useful to show a sequence of maps with progressive complexity in represented features (like starting with b/w dots, adding lines and then colours) and compare readability.
It could be interesting to produce different maps with different normalization methods for the single timespans, and to compare them and their usefulness in showing pattern evolution. Could this kind of comparison help to identify papers relevant to the specialization process? The examination of these liminal papers by domain experts could also serve as further validation of this whole approach.
What is the pattern, or rather the patterns, in the Wittgensteinian map? What could they be called? Maybe the fact that they are not as recognizable and easy to pigeonhole as the “science-oriented style” reveals a difficulty that those influenced by Wittgenstein faced in pursuing their careers: justifying their work to a philosophical community which was largely (if not almost exclusively) shaped by the science-oriented style; a community which expected arguments, theories, and so on from its members, and judged any other way of doing philosophy with some skepticism.
I wondered the same thing.
A related general issue, which I find interesting, is: what is the relevance and the weight of (informal and even implicit) quantitative considerations in traditional, non-quantitative works?
A follow up to Guido’s above remark. I agree that the appropriateness of the size of chunks does not depend only on the philosopher in question, but also on the aim of the enquiry. This is an important methodological issue, but it is not clear, at least to me, how to figure it out. It would be interesting to work the reverse way, that is, to test different sizes and then wonder: What do I learn in each different case? What kind of enquiry is this?
It would be useful to have an explicit description of such disjunctions, and some examples of the kind of conceptual analysis that is required here. Also a reflection on the very notion of a concept employed here would be welcome.
Fig. 1 is not very clear: does it represent the number of editions of LW? The graphics and the colours do not help.
This is implicitly how analytic philosophy treats the classics. One does not do historical reconstruction; rather, one “takes” what the classic can offer relative to the context of the contemporary debate and “uses” the classic’s scientific capital to support or attack certain ideas. This was done precisely with LW by Saul Kripke (who was criticized by the custodians of the philosophical reading of LW, such as Hacker & Baker). A similar operation was carried out on Descartes by Joseph Almog.
Certainly, they represent public impact from the point of view of the editors of the Philosophers’ Index. Perhaps it would be useful to understand the profile of the editors who have managed, and are now managing, this database, in order to get a precise picture of the perspective behind this representation of LW’s public impact.
Is PI still the source here? It is not stated explicitly.
It is truly incredible that there are only 4 dissertations!
Grazer publishes a lot in English. Philosophical Investigations is perceived in the community of analytic philosophers as a very low-ranked journal (for some, I guess, it is just dogmatic and sectarian); it is based in Hertfordshire, where Moyal-Sharrock works (http://researchprofiles.herts.ac.uk/portal/en/persons/daniele-moyalsharrock(47f89381-fa27-468b-9c9e-1dc21b6eed81).html), and she has played an important role in building the Wittgensteinian community. It is really striking to notice the absence from this list of top-tier journals such as Noûs, the Journal of Philosophy, the Philosophical Review, and the Australasian Journal of Philosophy (it seems amazing to me that there is not even one paper on Wittgenstein there; maybe they are not listed because they are under 20…).
It is really telling that OUP scores so low. Recently Palgrave, too, has been investing a lot in LW (though it is now a Springer division).
I have doubts about these data. I just checked Crispin Wright on PhilPapers, and he scores more than 10 works on LW:
https://philpapers.org/s/Crispin%20Wright%20%26%20wittgenstein
Do you mean “Pears”?
This is very interesting. Unfortunately the names are very small in the picture…
I have a doubt: MEANING is naturally understood as subsumed under LANGUAGE. Isn’t there a danger of double counting without establishing prior (semantically based) hierarchies between the topics?
My feeling is the following: metaphysics and philosophy of mathematics had a big rise in the third period in the analytic community (it is just a perception; it needs to be grounded in data, of course), and this happened by marginalizing LW, whose thought was too anti-theoretical on these topics…
Given the Appendix, I would call 11 “Epistemology”.
This figure is not easy to read: I don’t understand whether the coloured surface has a representational meaning or whether only the vertical value is relevant.