
kb autocommit
Jemoka committed Feb 27, 2025
1 parent 2421bd8 commit b37d55b
Showing 15 changed files with 556 additions and 96 deletions.
30 changes: 13 additions & 17 deletions content/posts/KBhapproximation_algorithms.md
@@ -14,7 +14,7 @@ Every statement that has a **polynomial time checkable** proof has such a proof

### PCP Theorem {#pcp-theorem}

For some constant \\(\alpha > 0\\), and for every language \\(L \in NP\\), there exists a polynomial-time [computable function]({{< relref "KBhmapping_reduction.md#computable-function" >}}) that makes every input \\(x\\) into a [3cnf-formula]({{< relref "KBhnon_deterministic_turing_machines.md#3cnf-formula" >}}) \\(f(x)\\) such that...
For some constant \\(\alpha > 0\\), and for every language \\(L \in NP\\), there exists a polynomial-time computable function that makes every input \\(x\\) into a 3cnf-formula \\(f(x)\\) such that...

1. if \\(x \in L\\), then \\(f(x) \in \text{SAT}\\)
2. if \\(x \notin L\\), then there is no assignment that satisfies more than a \\(\qty(1-\alpha)\\) fraction of \\(f(x)\\)'s clauses
@@ -24,22 +24,18 @@ That is, a sufficiently good approximation of [Max-SAT](#max-sat) will imply \\(P = NP\\)

## Provability {#provability}

By definition, everything in \\(NP\\) has a short and checkable proof (checkable in polynomial time)... the same goes for coNP and [PSPACE]({{< relref "KBhspace_complexity.md#pspace" >}}) if we **add interaction**.
By definition, everything in \\(NP\\) has a short and checkable proof (checkable in polynomial time)... the same goes for coNP and PSPACE if we **add interaction**.

**every problem in PSPACE has an interactive proof!!!**

### Interactive Proofs {#interactive-proofs}

We have a prover \\(P\\) and a verifier \\(V\\). \\(V\\) asks \\(P\\) about membership statements, and \\(P\\) responds with statements. These proofs can be used to prove membership in very powerful languages.

### Zero-Knowledge Proof {#zero-knowledge-proof}

This is a type of interactive proof that reveals **no knowledge** other than the answer to the membership query you asked; i.e. I give no witness, but I will still convince you.

### Graph Non-Isomorphism {#graph-non-isomorphism}
You can actually formulate any **PSPACE** language as a [Zero-Knowledge Proof](#zero-knowledge-proof).

Graphs \\(G\\) and \\(H\\) are [isomorphic]({{< relref "KBhisomorphism.md" >}}) if you can rename \\(G\\) to get \\(H\\) (i.e. they are the same up to renaming).

- GraphIsomorphism = {(G,H) | G and H are isomorphic}
- GraphNonIsomorphism = {(G,H) | G and H are not isomorphic}

GraphIsomorphism is in NP. But how do we show that two things are **not** isomorphic?
You will notice that the proof above is kind of zero-knowledge (if I am simply trying to verify that things are isomorphic, knowing which graph is isomorphic adds no other information)


## Approximation Hardness {#approximation-hardness}
@@ -53,7 +49,7 @@ To show these results, we will need approximation-preserving reductions

### approximation preserving reductions {#approximation-preserving-reductions}

for instance, [clique problem]({{< relref "KBhnon_deterministic_turing_machines.md#clique-problem" >}}) (\\(3SAT \leq\_{P} \text{CLIQUE}\\)) is **very** approximation preserving because the size of the clique corresponds exactly to the number of clauses you can satisfy.
for instance, (\\(3SAT \leq\_{P} \text{CLIQUE}\\)) is **very** approximation preserving because the size of the clique corresponds exactly to the number of clauses you can satisfy.

However, (\\(\text{IS} \leq\_{P} \text{Vertex-Cover}\\)) is super not approximation preserving; recall that our argument was that \\(V - IS\\) is a vertex cover, meaning \\(\qty(G, k) \Leftrightarrow (G, |V|-k)\\), where \\(k\\) and \\(|V|-k\\) are the sizes of the independent set and the vertex cover respectively.

@@ -63,11 +59,11 @@ These are not good approximations of each other; suppose the minimum vertex cover ...
## example {#example}


### [vertex cover]({{< relref "KBhnp_complete.md#vertex-cover" >}}) {#vertex-cover--kbhnp-complete-dot-md}
### {#d41d8c}

Recall the [vertex cover]({{< relref "KBhnp_complete.md#vertex-cover" >}}) problem:
Recall the vertex cover problem:

we want to find the smallest [vertex cover]({{< relref "KBhnp_complete.md#vertex-cover" >}})---this can be approximated greedily, which will find a [vertex cover]({{< relref "KBhnp_complete.md#vertex-cover" >}}) that is at most twice as large as the optimal one (a "two-approximation")
we want to find the smallest vertex cover---this can be approximated greedily, which will find a vertex cover that is at most twice as large as the optimal one (a "two-approximation")

**the algorithm**: set \\(C = \emptyset\\), and while there exists an uncovered edge \\(e\\), add both endpoints of such an \\(e\\) to \\(C\\)
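
As a hedged illustration (not part of the original notes): a minimal Python sketch of this greedy procedure, assuming the graph is given as a plain edge list.

```python
def greedy_vertex_cover(edges):
    """Return a vertex cover at most twice the size of a minimum one."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge (u, v) is still uncovered
            cover.add(u)
            cover.add(v)
    return cover

# Example: on the path 1-2-3-4 the optimum cover {2, 3} has size 2,
# so the greedy answer has size at most 4.
print(greedy_vertex_cover([(1, 2), (2, 3), (3, 4)]))
```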

@@ -79,13 +75,13 @@ we want to find the smallest [vertex cover]({{< relref "KBhnp_complete.md#vertex

### Max-SAT {#max-sat}

given a [cnf-formula]({{< relref "KBhnon_deterministic_turing_machines.md#3cnf-formula" >}}) (not just 3cnf), how many clauses can be satisfied? a **maximization problem**, because we want to satisfy the maximal number of clauses.
given a cnf-formula (not just 3cnf), how many clauses can be satisfied? a **maximization problem**, because we want to satisfy the maximal number of clauses.

approximation: we can always satisfy a constant fraction of all of the clauses: that is, when all clauses have at least 3 unique literals, we can satisfy at least 7/8 of all the clauses (i.e. we will satisfy \\(\geq \frac{7}{8}\\) of the number of clauses the optimal solution satisfies).
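
A hedged sketch (not in the original notes): a uniformly random assignment satisfies each clause with 3 distinct literals with probability \\(7/8\\), so in expectation it already reaches the \\(7/8\\) fraction. A small Python estimate, assuming a hypothetical clause encoding where positive/negative integers stand for literals.

```python
import random

def random_assignment_satisfied_fraction(clauses, num_vars, trials=1000):
    """Estimate the fraction of clauses a uniformly random assignment satisfies.

    Clauses are lists of nonzero ints: +i is variable i, -i is its negation
    (a hypothetical encoding, not from the notes).
    """
    total = 0.0
    for _ in range(trials):
        assignment = {i: random.random() < 0.5 for i in range(1, num_vars + 1)}
        satisfied = sum(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        total += satisfied / len(clauses)
    return total / trials

# With 3 distinct literals per clause this hovers around 7/8 = 0.875.
clauses = [[1, 2, 3], [-1, 2, -3], [1, -2, 3], [-1, -2, -3]]
print(random_assignment_satisfied_fraction(clauses, num_vars=3))
```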

We can't do better than this (i.e. we can't approximate up to a \\(\frac{7}{8}+\varepsilon\\) fraction for any \\(\varepsilon > 0\\)) unless \\(P = NP\\); see the [PCP Theorem](#pcp-theorem).


### [clique problem]({{< relref "KBhnon_deterministic_turing_machines.md#clique-problem" >}}) {#clique-problem--kbhnon-deterministic-turing-machines-dot-md}
### {#d41d8c}

for other problems (such as the clique problem), no constant-factor approximation may even exist
19 changes: 10 additions & 9 deletions content/posts/KBhcomplexity_theory_index.md
@@ -6,15 +6,16 @@ draft = false

## Lectures {#lectures}

- [SU-CS254 JAN062025]({{< relref "KBhsu_cs254_jan062025.md" >}})
- [SU-CS254 JAN082025]({{< relref "KBhsu_cs254_jan082025.md" >}})
- [SU-CS254 JAN132025]({{< relref "KBhsu_cs254_jan1032025.md" >}})
- [SU-CS254 JAN152025]({{< relref "KBhsu_cs254_jan152025.md" >}})
- [SU-CS254 JAN222025]({{< relref "KBhsu_cs254_jan222025.md" >}})
- [SU-CS254 JAN272025]({{< relref "KBhsu_cs254_jan272025.md" >}})
- [SU-CS254 JAN292025]({{< relref "KBhsu_cs254_jan292025.md" >}})
- [SU-CS254 FEB032025]({{< relref "KBhsu_cs254_feb032025.md" >}})
- [SU-CS254 FEB122025]({{< relref "KBhsu_cs254_feb122025.md" >}})
-
-
-
-
-
-
-
-
-
- [SU-CS254 FEB262025]({{< relref "KBhsu_cs254_feb262025.md" >}})


## Logistics {#logistics}
8 changes: 4 additions & 4 deletions content/posts/KBhcounterfactual.md
@@ -1,15 +1,15 @@
+++
title = "counterfactual"
title = "counterfactual (quantum)"
author = ["Houjun Liu"]
draft = false
+++

[quantum information theory]({{< relref "KBhquantum_information_theory.md" >}}) requires manipulating counterfactual information---not what the currently known states are, but what the _next possible states_ are.
quantum information theory requires manipulating counterfactual information---not what the currently known states are, but what the _next possible states_ are.

Inside [physics]({{< relref "KBhphysics.md" >}}), there are already a few principles which are counterfactual.
Inside physics, there are already a few principles which are counterfactual.

1. Conservation of energy: a perpetual motion machine is **impossible**
2. Second law: it's **impossible** to convert all heat into useful work
3. Heisenberg's uncertainty: it's **impossible** to reliably copy all states of a qubit

With the impossibles, we can make the possible.
7 changes: 7 additions & 0 deletions content/posts/KBhcounterfactuals.md
@@ -0,0 +1,7 @@
+++
title = "counterfactual"
author = ["Houjun Liu"]
draft = false
+++

"if thing didn't happen would I have..."
99 changes: 99 additions & 0 deletions content/posts/KBhexplainability.md
@@ -0,0 +1,99 @@
+++
title = "explainability"
author = ["Houjun Liu"]
draft = false
+++

<div class="definition"><span>

Explainability is the study of understanding, when stuff breaks, why it did.

</span></div>

Here are a set of explainability techniques!


## policy visualization {#policy-visualization}

<div class="definition"><span>

Roll your system out and look at it

</span></div>

Some common strategies that people use to do this:

- plot the policy: look at what the agent says to do at each state (if you have too many dimensions, just plot slices!)
- **slicing**: one way to deal with history-dependent trajectories is to just count how often your system takes each action at each step, and plot the per-step argmax (see the sketch below)
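
A minimal sketch of the slicing idea, assuming a hypothetical rollout format (each trajectory is just the list of actions taken per step); not from the original note.

```python
from collections import Counter
import matplotlib.pyplot as plt

def plot_action_slices(trajectories):
    """Plot the most common action taken at each timestep across many rollouts.

    trajectories: list of rollouts; each rollout is a list of actions
    (ints or short strings), one per timestep.
    """
    horizon = max(len(t) for t in trajectories)
    most_common = []
    for step in range(horizon):
        counts = Counter(t[step] for t in trajectories if len(t) > step)
        most_common.append(counts.most_common(1)[0][0])  # per-step argmax action
    plt.plot(range(horizon), most_common, marker="o")
    plt.xlabel("timestep")
    plt.ylabel("most common action")
    plt.show()
```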


## feature importance {#feature-importance}

Our goal is still to understand the contribution of various features to the overall behavior of a system.


### sensitivity analysis {#sensitivity-analysis}

<div class="definition"><span>

[sensitivity analysis](#feature-importance) allows us to understand how a particular output changes when a single feature is changed (a small sketch follows this definition)

- take a feature
- screw with it
- how does it contribute to the variance of the outcomes?

</span></div>
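
A minimal sketch of this perturb-one-feature loop, assuming a hypothetical `model` callable and a numpy feature vector (not from the original note).

```python
import numpy as np

def sensitivity(model, x, feature_index, scale=0.1, samples=100):
    """Variance of the model's output when only one feature is perturbed."""
    outputs = []
    for _ in range(samples):
        x_perturbed = np.array(x, dtype=float)
        x_perturbed[feature_index] += np.random.normal(0.0, scale)  # screw with it
        outputs.append(model(x_perturbed))
    return float(np.var(outputs))

# `model` is any callable mapping a feature vector to a scalar (an assumption);
# comparing this variance across feature_index values ranks feature importance.
```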

<div class="theorem"><span>

this is really slow

</span></div>

<div class="proof"><span>

...because perturbing each input sequentially is exponential in the search space.

</span></div>

So instead, we could consider something like a gradient-based measure:

<div class="definition"><span>

take the gradient of the output with respect to each input feature, and use its magnitude as that feature's importance

</span></div>

this doesn't really handle gradients that are saturated (i.e. the changes were big, but once the input gets big enough the function stops changing). So instead, we could consider [integrated gradients](#feature-importance) (a numerical sketch follows the definition below):

<div class="definition"><span>

For function \\(f\\) under test, and feature perturbation \\(x \in [x\_0, x\_1]\\), we compute:

\begin{equation}
\frac{1}{x\_1 - x\_0} \int\_{x\_0}^{x\_1} f\qty(x) \dd{x}
\end{equation}

</span></div>
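
A hedged numerical sketch of the average above (not from the original note): approximate the integral with a midpoint Riemann sum, where `g` stands in for the quantity under test (e.g. the model output, or its gradient with respect to the feature, depending on your reading of the definition above).

```python
import math

def averaged_over_interval(g, x0, x1, steps=1000):
    """Approximate (1 / (x1 - x0)) * integral of g over [x0, x1] (midpoint rule)."""
    width = (x1 - x0) / steps
    total = sum(g(x0 + (i + 0.5) * width) * width for i in range(steps))
    return total / (x1 - x0)

# Example (assumed setup): averaging a saturating function over a wide range
# still captures its behavior where the pointwise value has flattened out.
print(averaged_over_interval(math.tanh, 0.0, 4.0))
```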


### shapley values {#shapley-values}

One problem with [sensitivity analysis](#feature-importance) is that competing feature effects neutralize each other: that is, if \\(z = x \vee y\\), perturbing \\(x\\) or \\(y\\) alone will not have any influence on the value of \\(z\\). [shapley values](#shapley-values) help us account for subsets of features (a sampling sketch follows the definition below).

<div class="definition"><span>

The Shapley Value is the expectation of the difference across all possible subsets of features.

1. randomly fix a subset and randomly sample values in the subset
2. compute the target value
3. repeat 1-2 with the feature under test included in the randomly sampled subset
4. compute the difference between the case where you included the target feature and the case where you did not
5. take the expectation of the differences from step 4

</span></div>
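
A minimal sampling sketch of steps 1-5 above, assuming a hypothetical `model` callable on feature dicts and a `baseline` of reference values substituted in for excluded features (not from the original note).

```python
import random

def sampled_shapley(model, x, baseline, feature, num_samples=200):
    """Monte-Carlo estimate of the Shapley value of `feature` for the instance x."""
    other_features = [f for f in x if f != feature]
    total = 0.0
    for _ in range(num_samples):
        subset = {f for f in other_features if random.random() < 0.5}  # step 1
        without = {f: (x[f] if f in subset else baseline[f]) for f in x}
        without[feature] = baseline[feature]                   # target feature excluded
        with_feature = dict(without, **{feature: x[feature]})  # step 3: include it
        total += model(with_feature) - model(without)          # step 4: difference
    return total / num_samples                                 # step 5: expectation
```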


## surrogate models {#surrogate-models}

see
7 changes: 7 additions & 0 deletions content/posts/KBhfailure_mode_characterization.md
@@ -0,0 +1,7 @@
+++
title = "failure mode characterization"
author = ["Houjun Liu"]
draft = false
+++

take a bunch of failure trajectories and cluster them; this can possibly be done with STL systems
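
A hedged sketch of the clustering idea, assuming a hypothetical featurization of each trajectory (length, mean, and peak of a scalar signal) and scikit-learn's k-means; not from the original note.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_failures(failure_trajectories, n_clusters=3):
    """Cluster failure trajectories by simple summary features (assumed featurization).

    failure_trajectories: list of 1-D numeric signals, one per failure rollout.
    """
    features = np.array([
        [len(traj), float(np.mean(traj)), float(np.max(traj))]  # length, mean, peak
        for traj in failure_trajectories
    ])
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
```
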
21 changes: 21 additions & 0 deletions content/posts/KBhgraph_isomorphism_is_in_np.md
@@ -0,0 +1,21 @@
+++
title = "Graph Isomorphism is in NP"
author = ["Houjun Liu"]
draft = false
+++

Recall the definition of graph isomorphism: if you can relabel \\(G\\) to get \\(G'\\), then they are the same up to relabeling.

<div class="theorem"><span>

\begin{equation}
\text{GISO} = \qty {\langle G,G' \rangle \mid G \cong G'} \in NP
\end{equation}

</span></div>

<div class="proof"><span>

Because the prover can just give the relabeling, which the verifier checks in polynomial time.

</span></div>
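
A minimal verifier sketch (not from the original note): given the relabeling as a certificate, check in polynomial time that applying it to \\(G\\)'s edges gives exactly \\(G'\\)'s edges; graphs are assumed to be plain edge lists.

```python
def verify_relabeling(edges_g, edges_g_prime, pi):
    """Check that the permutation pi (a dict on vertices) maps G's edges onto G''s."""
    relabeled = {frozenset((pi[u], pi[v])) for u, v in edges_g}
    return relabeled == {frozenset(e) for e in edges_g_prime}

# Example: a triangle, relabeled, is still the same triangle.
G  = [(1, 2), (2, 3), (3, 1)]
Gp = [("a", "b"), ("b", "c"), ("c", "a")]
print(verify_relabeling(G, Gp, {1: "a", 2: "b", 3: "c"}))  # True
```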
