
[xdoctest] reformat example code with google style in No. 261-263 #57703

Closed
wants to merge 7 commits

Conversation

kkk459
Contributor

@kkk459 kkk459 commented Sep 25, 2023

PR types

others

PR changes

others

Description

#55629

@paddle-bot

paddle-bot bot commented Sep 25, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added the contributor External developers label Sep 25, 2023
@luotao1 luotao1 added the HappyOpenSource Pro (advanced "happy open source" program with more challenging tasks) label Sep 26, 2023
@megemini
Contributor

megemini commented Sep 26, 2023

@SigureMo For the "use plain sample code style" log message, the level needs to be changed from info to warning.

After the logging was split out, this message was never updated; I'll fix it.

PR: #57775 😅
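
For context, a minimal sketch of the intended change, assuming the sample-code checker uses Python's standard logging module (the logger name and message text here are hypothetical):

import logging

logger = logging.getLogger("sampcd_processor")  # hypothetical logger name

# before: the notice was easy to miss among ordinary INFO output
# logger.info("use plain sample code style")

# after: surfaced as a warning so it stands out in CI logs
logger.warning("use plain sample code style")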

Contributor

@megemini megemini left a comment


Let's first fix the examples in fleet that weren't converted, then re-run CI and see the results.

import paddle.distributed.fleet as fleet
strategy = fleet.DistributedStrategy()
fleet.init(strategy=strategy)
>>> import paddle
Contributor


The example has to go under Examples:. Also, the code-example1 above needs the same change.
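
For reference, the expected docstring layout would look roughly like this (a sketch reusing the snippet above):

Examples:
    .. code-block:: python

        >>> import paddle.distributed.fleet as fleet
        >>> strategy = fleet.DistributedStrategy()
        >>> fleet.init(strategy=strategy)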

Contributor Author


done

Contributor


Add an Examples: line before the .. code-block:: python. You can refer to the code further down the file.

import paddle.distributed.fleet as fleet
strategy = fleet.DistributedStrategy()
fleet.init(strategy=strategy)
>>> import paddle
Contributor


Add an Examples: line before the .. code-block:: python. You can refer to the code further down the file.

Comment on lines +62 to +70
>>> if pre_layer_norm:
... out = layer_norm1(x)
>>> else:
... out = x
>>> out = linear2(dropout1(activation(linear1(src))))
>>> if add_residual:
... out = residual + dropout2(out)
>>> else:
... out = dropout2(out)
Contributor


Change .. code-block:: python to .. code-block:: text here.

Also, else belongs to a compound statement, so use ... instead of >>>.
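
For example, the whole compound statement would then read (a sketch of the expected prompt style):

>>> if pre_layer_norm:
...     out = layer_norm1(x)
... else:
...     out = x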

Comment on lines +341 to +342
>>> # [2, 4, 128]
>>> print(output.shape)
Contributor


>>> print(output.shape)
[2, 4, 128]

The original comment-style expected value needs to be converted into actual printed output, as above.

Comment on lines +500 to +526
>>> residual = x
>>> if pre_layer_norm:
... out = layer_norm(x)
>>> else:
... out = x
>>> # compute q, k, v
>>> out = matmul(out, qkv_weight) + qkv_bias
>>> out = transpose(out, perm=[2, 0, 3, 1, 4])
>>> # extract q, k and v from out
>>> q = out[0:1,::] * (head_dim ** -0.5)
>>> k = out[1:2,::]
>>> v = out[2:3,::]
>>> out = matmul(q, k, transpose_y=True)
>>> out = out + attn_mask
>>> out = softmax(out)
>>> out = dropout(out)
>>> out = matmul(out, v)
>>> # combine heads
>>> out = transpose(out, perm=[0, 2, 1, 3])
>>> # project to output
>>> out = linear(out)
>>> if add_residual:
... out = residual + dropout(out)
>>> else:
... out = dropout(out)
>>> if not pre_layer_norm:
... out = layer_norm(out)
Contributor


Same here: change the .. code-block:: python,

and the prompt before else.

Comment on lines +609 to +610
>>> # [2, 4, 128]
>>> print(output.shape)
Contributor


Convert the comment into actual output here as well.

Comment on lines +914 to +944
>>> if pre_layer_norm:
... out = layer_norm(x)
... out = qkv_linear(out) + qkv_bias
>>> else:
... out = qkv_linear(x) + qkv_bias
>>> out = transpose(out, perm=[2, 0, 3, 1, 4])
>>> # extract q, k and v from out.
>>> q = out[0:1, ::]
>>> k = out[1:2, ::]
>>> v = out[2:3, ::]
>>> out = q * k^t
>>> out = attn_mask + out
>>> out = softmax(out)
>>> out = dropout(out)
>>> out = out * v
>>> out = transpose(out, perm=[0, 2, 1, 3])
>>> out = linear(out)
>>> if pre_layer_norm:
... out = x + dropout(out + bias)
>>> else:
... out = layer_norm(x + dropout(out + bias))

>>> residual = out
>>> if pre_layer_norm:
... out = ffn_layer_norm(out)
>>> out = ffn1_linear(out)
>>> out = dropout(activation(out + ffn1_bias))
>>> out = ffn2_linear(out)
>>> out = residual + dropout(out + ffn2_bias)
>>> if not pre_layer_norm:
... out = ffn_layer_norm(out)
Contributor


Same here: change the .. code-block:: python,

and the else prompt.

Comment on lines +1043 to +1044
>>> # [2, 4, 128]
>>> print(output.shape)
Contributor


Same: convert to actual output.

@@ -290,7 +291,7 @@ def fused_bias_dropout_residual_layer_norm(

.. code-block:: python

y = layer_norm(residual + dropout(bias + x))
>>> y = layer_norm(residual + dropout(bias + x))
Contributor


Change the .. code-block:: python above to .. code-block:: text.

This part is an algorithm description, so it doesn't need to be treated as a runnable example.
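
After the change, that fragment of the docstring would read roughly (a sketch):

.. code-block:: text

    y = layer_norm(residual + dropout(bias + x))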

Comment on lines +1075 to +1080
>>> import paddle
>>> import paddle.distributed.fleet as fleet
>>> fleet.init(is_collective=True)
>>> strategy = fleet.DistributedStrategy()
>>> optimizer = paddle.optimizer.SGD(learning_rate=0.001)
>>> optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)
Contributor


>>> import paddle
>>> import paddle.distributed.fleet as fleet
>>> fleet.init(is_collective=True)
>>> strategy = fleet.DistributedStrategy()
>>> linear = paddle.nn.Linear(10, 10)
>>> optimizer = paddle.optimizer.SGD(learning_rate=0.001, parameters=linear.parameters())
>>> optimizer = fleet.distributed_optimizer(optimizer, strategy=strategy)

Judging from the error message, the optimizer should be given its parameters, or the example could use static graph mode instead.

Comment on lines +62 to +72
>>> print(y_train)
Tensor(shape=[2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
[[2., 0., 6.],
[0., 0., 0.]])

>>> m.eval() # switch the model to test phase
>>> y_test = m(x)
>>> print(y_test)
Tensor(shape=[2, 3], dtype=float32, place=Place(cpu), stop_gradient=True,
[[1., 2., 3.],
[4., 5., 6.]])
Contributor


For dropout, try setting a seed. If a seed still can't pin the output, wrap this part of the output in a skip directive.
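
A sketch of both options; the seed value is arbitrary, and the skip form uses the standard doctest +SKIP directive, which xdoctest also honors:

>>> import paddle
>>> paddle.seed(2023)  # arbitrary fixed seed so dropout becomes deterministic
>>> x = paddle.to_tensor([[1., 2., 3.], [4., 5., 6.]])
>>> m = paddle.nn.Dropout(p=0.5)
>>> y_train = m(x)

If the values still differ across devices, skip checking only the unstable output:

>>> print(y_train)  # doctest: +SKIP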

@paddle-ci-bot

paddle-ci-bot bot commented Oct 4, 2023

Sorry to inform you that 361fa9d's CIs passed more than 7 days ago. To prevent PR conflicts, you need to re-run all CIs manually.

Member

@SigureMo SigureMo left a comment


The review comments need to be addressed, and the latest develop branch needs to be merged in.

@luotao1
Contributor

luotao1 commented Oct 17, 2023

@KongAKun Remember to update the PR.

@luotao1
Contributor

luotao1 commented Oct 18, 2023

Closed because the following PR was merged:

@luotao1 luotao1 closed this Oct 18, 2023