1) How is generative linguistics “rationalist” (rather than “empiricist”, I presume – or what even is the distinction meant)?

“Rationalism” here refers to the philosophical position that innate reason is the primary source of knowledge, while “empiricism” refers to the position that experience is the primary source. So the extreme version of rationalism is solipsism, while the extreme version of empiricism is the “blank slate” conception of human psychology. Of course, I don’t think anyone seriously holds either position. Generative linguistics is rationalist in the sense that it assumes that most of our knowledge of language is innate. [Note: I saw your reply to this comment, but I had already written this, so I left it.]

2) I think Katz fails to distinguish generalizable and non-generalizable inferences. (1)-(3) is generalizable: you can freely substitute predicates in there. (4)-(5) is not; it requires the additional lexical knowledge that “All men are male” (also known as a hyponymic relation, which is bound to hold between some predicates if you have loads and loads of them for our real-world experience, which is, in the end, pretty repetitive). And if that premise is false, as some versions of trans-language – where ‘male’ usually refers to biological sex whereas ‘man’ refers to gender identification – suggest, then (4)-(5) immediately becomes a false inference: Socrates may well be a trans man, and it is only a fact of world knowledge that the actual Socrates was a cis man and thus male.

In the versions of trans-language that you’re referring to, then, the word “man”, the word “male”, or both mean something different, so in that language (4) and (5) may have different semantic representations than the ones I intended. It’s a simple fix, though: replace “male” in (5) with “human” or “person” or “adult”. Note that the point still holds if Socrates never existed, or was a time-travelling robot, or was actually three kids in a trenchcoat. The point is that the truth of (4) is related to the truth of (5) in a logical way, not a factual way.
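To make the logical (rather than factual) character of the inference explicit, one standard move is a meaning postulate. This is only a sketch: the predicate symbols and the constant s are my own labels for illustration, not notation from the original discussion.

```latex
% Meaning postulate encoding the lexical (hyponymic) relation
% between 'man' and 'male' (or 'human'/'person'/'adult', per the fix above):
\forall x \, [\mathrm{man}(x) \rightarrow \mathrm{male}(x)]

% With s denoting Socrates, (4) plus the postulate yields (5)
% by universal instantiation and modus ponens:
\mathrm{man}(s),\ \forall x \, [\mathrm{man}(x) \rightarrow \mathrm{male}(x)] \ \vdash\ \mathrm{male}(s)
```

On this picture the inference is licensed by the postulate, i.e., by the semantics of the language itself, not by any fact about the actual, historical Socrates.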

3) While “simple words” are “non-analyzable” in the sense that they are often not analyzed further into parts (though sometimes they are; and in many works in formal semantics, omitting the part of lexical semantics where predicates like ‘bachelor’ are further decomposed is a simplification adopted to avoid a digression, not an ontological stance), they are part of the analysis in the sense that they have a logical type, have valencies, and so on.

That’s precisely what I meant by “non-analyzable”: “primitive”. Every theory needs primitives, and those primitives have certain properties, but they don’t have subparts. By analogy, consider genetics, whose primitives are nucleotides. A geneticist doesn’t deny that each nucleotide is made up of atoms, which are made up of protons, neutrons, and electrons, which are made up of strings (maybe?), but they studiously ignore those facts in their theorizing.

4) Continuing the line of thought, the usual stance among those who mention this directly is that there are some lexical primitives, not that all, say, common nouns are primitives. I think Vierzbicka is the most explicit scholar in this regard, but her basic assumptions (not her suggestions for the actual set of primitives) seem to be somewhat shared.

In my experience, you’re right. Most working formal semanticists would likely agree that common nouns and verbs have underlying structure, but they tend to say either that studying that structure belongs to some other field (psychology, or philosophy, or literary theory, etc.) or that it would be nice to study but is sadly beyond our abilities right now. Maybe Wierzbicka (I think that’s the proper Polish spelling) is an exception, though.