LogMark for Researchers
Track claims, hypotheses, and contradictions across your literature review.
The Problem
You're reading a paper and it contradicts something you read two weeks ago. The contradiction matters: it might signal a gap in the field, a methodological difference, or an error. But you can't remember which paper, which claim, or where you noted it.
Research generates a constant stream of micro-observations: claims worth verifying, methods worth comparing, contradictions worth investigating. Most researchers capture these in margin annotations, scattered notes, or not at all. The cross-paper synthesis that produces original insight requires connecting fragments spread across months of reading.
Why LogMark
Every capture is timestamped and routed. Tag with paper names, methods, or research questions. The vault accumulates a searchable corpus of your observations. Contradictions captured months apart surface when you search by topic. Blocks captured during analysis become the gaps that define your contribution.
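A capture line pairs a routing tag with a one-letter type and free text. As a minimal sketch of how such lines could be parsed and timestamped — assuming a `+tag type: text` shape where the types are i, t, b, and d, and not describing LogMark's actual internals:

```python
import re
from datetime import datetime, timezone

# Hypothetical capture format: "+tag type: text", where type is one
# letter (i, t, b, or d). This is an illustrative sketch, not
# LogMark's real parser or storage format.
CAPTURE = re.compile(r"^\+(?P<tag>\w+)\s+(?P<kind>[itbd]):\s*(?P<text>.+)$")

def parse_capture(line):
    """Parse one capture line into a timestamped record, or None."""
    m = CAPTURE.match(line.strip())
    if not m:
        return None
    return {
        "tag": m.group("tag"),      # routes the capture (e.g. literature)
        "kind": m.group("kind"),    # one-letter type
        "text": m.group("text"),    # the observation itself
        "ts": datetime.now(timezone.utc).isoformat(),  # capture time
    }

record = parse_capture(
    "+literature i: Smith 2024 claims attention heads are redundant above layer 8"
)
```

Because every record carries its tag and timestamp, observations logged months apart under the same tag stay connected.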
Workflows
Claim capture during reading
+literature i: Smith 2024 claims attention heads are redundant above layer 8 -- contradicts Chen 2023
+literature i: novel use of contrastive loss for few-shot classification -- check if this applies to our setting
Hypothesis capture
+thesis i: if attention redundancy is layer-dependent, pruning strategies should be layer-aware
Method notes
+methodology d: using bootstrapped confidence intervals instead of t-tests -- handles non-normal distributions in our data
Contradiction logging
+literature b: Smith 2024 and Chen 2023 disagree on attention redundancy -- different model sizes? Need to check
Research task tracking
+thesis t: re-run ablation with layer-wise pruning by end of month
+literature t: read the 3 papers from ICML 2025 on efficient attention
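The payoff of these workflows is retrieval: captures logged months apart surface together when you search by topic. A minimal sketch, assuming the vault is stored as one capture line per entry (an assumption for illustration, not LogMark's actual storage):

```python
# Hypothetical vault: one capture line per entry, accumulated over
# months of reading. The entries mirror the workflow examples above.
vault = [
    "+literature i: Smith 2024 claims attention heads are redundant above layer 8 -- contradicts Chen 2023",
    "+literature b: Smith 2024 and Chen 2023 disagree on attention redundancy -- different model sizes? Need to check",
    "+thesis i: if attention redundancy is layer-dependent, pruning strategies should be layer-aware",
]

def search(entries, term):
    """Return every capture mentioning term, case-insensitively."""
    return [line for line in entries if term.lower() in line.lower()]

# A stem like "redundan" catches both "redundant" and "redundancy",
# pulling the claim, the contradiction, and the hypothesis together.
hits = search(vault, "redundan")
```

A single topic search recovers the claim, the logged contradiction, and the resulting hypothesis — the cross-paper thread that margin notes would have scattered.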