LogMark for Researchers

Track claims, hypotheses, and contradictions across your literature review.

The Problem

You're reading a paper and it contradicts something you read two weeks ago. The contradiction matters - it might be a gap in the field, a methodological difference, or an error. But you can't remember which paper, which claim, or where you noted it.

Research generates a constant stream of micro-observations: claims worth verifying, methods worth comparing, contradictions worth investigating. Most researchers capture these in margin annotations, scattered notes, or not at all. The cross-paper synthesis that produces original insight requires connecting fragments spread across months of reading.

Why LogMark

Every capture is timestamped and routed. Tag each one with paper names, methods, or research questions, and the vault accumulates a searchable corpus of your observations. Contradictions captured months apart surface when you search by topic. Blocks logged during analysis become the gaps that define your contribution.
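
How the vault stores this is not shown here; as a rough mental model, treat it as a timestamped, tag-searchable list of plain-text captures. A minimal Python sketch under that assumption (the list-of-tuples vault and the search helper are hypothetical, not LogMark's actual storage):

from datetime import datetime

# Hypothetical vault: each capture stored as (timestamp, raw text).
# Illustrative only; not LogMark's actual storage format.
vault = [
    (datetime(2025, 1, 14), "+literature i: Smith 2024 claims attention heads are redundant above layer 8 #attention"),
    (datetime(2025, 3, 2), "+literature b: Smith 2024 and Chen 2023 disagree on attention redundancy #attention"),
]

def search(vault, term):
    # Filter captures by tag or keyword, oldest first.
    return sorted((ts, text) for ts, text in vault if term in text)

# Contradictions captured months apart surface side by side.
for ts, text in search(vault, "#attention"):
    print(ts.date(), text)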

Workflows

Claim capture during reading

+literature i: Smith 2024 claims attention heads are redundant above layer 8 -- contradicts Chen 2023
+literature i: novel use of contrastive loss for few-shot classification -- check if this applies to our setting

Hypothesis capture

+thesis i: if attention redundancy is layer-dependent, pruning strategies should be layer-aware

Method notes

+methodology d: using bootstrapped confidence intervals instead of t-tests -- handles non-normal distributions in our data

Contradiction logging

+literature b: Smith 2024 and Chen 2023 disagree on attention redundancy -- different model sizes? Need to check

Research task tracking

+thesis t: re-run ablation with layer-wise pruning by end of month
+literature t: read the 3 papers from ICML 2025 on efficient attention

Notation Guide

+thesis, +experiment-1, +grant-proposal - Project routing
+literature, +methodology, +statistics - Domain routing for knowledge areas
#attention, #pruning, #few-shot - Topic tags
#smith-2024, #icml-2025 - Paper and venue tags
t: (task), b: (block), d: (decision), i: (insight) - Quick entry types
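
The notation is regular enough to parse mechanically. Here is a minimal Python sketch of the grammar as shown above; it is illustrative only, not LogMark's actual parser, and the field names are invented for the example:

import re

# Sketch of the capture grammar above: optional +routes, a one-letter
# entry type, then free text that may contain #tags.
CAPTURE = re.compile(
    r"^(?P<routes>(?:\+[\w-]+\s+)*)"   # +thesis, +literature, ... (routing)
    r"(?P<type>[tbdi]):\s*"            # t/b/d/i entry type
    r"(?P<body>.*)$"                   # note text, may contain #tags
)

def parse(line):
    m = CAPTURE.match(line)
    if m is None:
        return None
    return {
        "routes": re.findall(r"\+[\w-]+", m.group("routes")),
        "type": m.group("type"),
        "tags": re.findall(r"#[\w-]+", m.group("body")),
        "body": m.group("body"),
    }

print(parse("+literature i: novel use of contrastive loss #few-shot"))
# {'routes': ['+literature'], 'type': 'i', 'tags': ['#few-shot'], ...}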

Example Research Day

9:00 AM
Morning planning.
t: finish literature review section on attention pruning
+thesis t: email co-author about experiment timeline
10:00 AM
Reading a new paper. A claim jumps out.
+literature i: Wang 2025 shows 40% of attention heads can be pruned without performance loss on GLUE -- stronger claim than Smith 2024's layer-8 finding
10:15 AM
The connection crystallizes.
+thesis i: the discrepancy between Wang and Smith might be task-dependent -- GLUE vs generative tasks. This could be our angle.
11:30 AM
Experiment planning.
+thesis d: running pruning experiments on both GLUE and generative benchmarks -- if results diverge, that's the paper's contribution
2:00 PM
Experiment hits a wall.
+experiment-1 b: GPU memory insufficient for full model with generative benchmark -- need to request cluster access or reduce batch size
4:00 PM
Advisor meeting generates clarity.
+thesis d: framing the paper as "task-dependent attention redundancy" rather than just pruning -- broader contribution, clearer narrative