
[xdoctest] reformat example code with google style in No.324, 333, 336, 353 #57565

Merged
merged 6 commits into from
Oct 11, 2023

Conversation

longranger2
Contributor

PR types

Others

PR changes

Others

Description

Updated the example code in the following files to the new format and verified them with xdoctest:

  • paddle/fluid/pybind/parallel_executor.cc
  • python/paddle/base/dygraph/base.py
  • python/paddle/base/executor.py
  • paddle/fluid/pybind/eager_method.cc

@sunzhongkai588 @SigureMo @megemini


@paddle-bot

paddle-bot bot commented Sep 20, 2023

Your PR has been submitted. Thanks for your contribution!
Please wait for the result of CI firstly. See Paddle CI Manual for details.

@paddle-bot paddle-bot bot added the contributor External developers label Sep 20, 2023
@SigureMo SigureMo (Member) left a comment

The issues reported in CI also need a look.

data = paddle.to_tensor(data)
x = linear(data)
print(x.numpy())
>>> paddle.disable_static()
Member

Isn't dynamic mode the default? Why is this needed? Is the convert_doctest you used too old, or is it something else?

y = x[1]
print(y.is_contiguous())
>>> x = paddle.to_tensor([1, 2, 3])
>>> y = x[1]
Member

Why was the is_contiguous call below simply deleted? Isn't this the docstring of is_contiguous itself?

build_strategy = static.BuildStrategy()
build_strategy.fuse_adamw = True
)DOC")
>>> import paddle
Member

A blank line is needed above.

import paddle
print(paddle.in_dynamic_mode()) # True, dynamic mode is turn ON by default since paddle 2.0.0
>>> import paddle
>>> print(paddle.in_dynamic_mode()) # True, dynamic mode is turn ON by default since paddle 2.0.0
Member

The True in the trailing comment can be removed now.

paddle.enable_static()
print(paddle.in_dynamic_mode()) # False, Now we are in static graph mode
>>> paddle.enable_static()
>>> print(paddle.in_dynamic_mode()) # False, Now we are in static graph mode
Member

Same as above.

Comment on lines 487 to 490
>>> print(tmp.gradient() is None)
>>> print(l0.weight.gradient() is None)
True
False
Member

The output is in the wrong place, isn't it? Isn't line 489 the output of line 487?

Comment on lines 701 to 704
>>> print(test_dygraph_grad(create_graph=False))
>>> print(test_dygraph_grad(create_graph=True))
[2.]
[4.]
Member

Same as above: each output should follow its corresponding statement.

... c = np.array([2+1j, 2])
... z = base.dygraph.to_variable(c)
... z.numpy()
... z.dtype
Member

Use print here; otherwise __repr__ is called and you get a VarType instead of a string like paddle.float32.
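The reason print matters in a doctest: a bare expression is compared against the value's repr(), while print() goes through str(), and the two can differ. A minimal stand-alone sketch (the hypothetical Dtype class and its two strings are illustrative, no Paddle needed):

```python
import doctest

class Dtype:
    """Stand-in for a dtype object whose str() and repr() differ.

    >>> d = Dtype()
    >>> d                    # bare expression: doctest compares repr()
    VarType.FP32
    >>> print(d)             # print(): doctest compares str()
    paddle.float32
    """

    def __str__(self):
        return "paddle.float32"   # doc-friendly string

    def __repr__(self):
        return "VarType.FP32"     # enum-style internal representation

# Both doctest examples above pass, so this reports 0 failures.
print(doctest.testmod().failed)
```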

... y.shape
...
... y = base.dygraph.to_variable(((0.1, 1.2), (2.2, 3.1), (4.9, 5.2)), dtype='int32')
... y.shape
Member

Please add print to the others as well.

>>> x is reasonable without print,

but ... x looks odd.

# [-0.24635398 -0.13003758]
# [-0.49232286 -0.25939852]
# [-0.44514108 -0.2345845 ]]
>>> # required: gpu
Member

This was not converted to a doctest directive.

@megemini when you have time, could you add this to the bad-statement checks? It seems only a regex rule needs to be added now.
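For background, markers like # required: gpu are expected to become inline doctest directives. A minimal stdlib sketch using doctest's +SKIP (Paddle's xdoctest setup may expect a different spelling, e.g. a REQUIRES-style directive, so treat the exact directive name as an assumption):

```python
import doctest

def gpu_only():
    """Doc example whose GPU-only call is excluded via a directive.

    >>> 1 + 1
    2
    >>> gpu_only()           # doctest: +SKIP
    'needs a GPU'
    """
    return "needs a GPU"

# The +SKIP directive keeps the line visible in the docs but never
# executes it, so the run reports zero failures even without a GPU.
print(doctest.testmod().failed)
```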

Contributor

PR: #57578 🤗

@luotao1 luotao1 added the HappyOpenSource Pro (advanced happy open source program, with more challenging tasks) label Sep 21, 2023
@longranger2
Contributor Author

done

>>> import paddle

>>> x = paddle.to_tensor(1.0, stop_gradient=False)
>>> clone_x = x.clone()
Contributor

clone_x.retain_grads() needs to be added here; otherwise clone_x.stop_gradient is False while clone_x.grad is None, which makes the example meaningless ~


print(y.grad) # [1., 1., 1.]
>>> print(y.grad) # [1., 1., 1.]
Contributor

There should be output ~

>>> print(x.grad)
Tensor(shape=[1], dtype=float32, place=Place(cpu), stop_gradient=False, [20.])

>>> print(detach_x.grad)
Contributor

The original comment 'stop_gradient=True' by default is explanatory text and can be kept ~

Comment on lines 1171 to 1176
>>> print(underline_x)
- place: Place(cpu)
- shape: [1]
- layout: NCHW
- dtype: 5
- data: [1]
Contributor

  1. Keep the # a Dense Tensor info comment.
  2. The result here differs from what I get on AI Studio; please confirm:
  - place: Place(cpu)
  - shape: [1]
  - layout: NCHW
  - dtype: float32
  - data: [1]


x = paddle.to_tensor([1, 2, 3])
print(x.data_ptr())
>>> x = paddle.to_tensor([1, 2, 3])
Contributor

Don't omit print(x.data_ptr()) here; its output can be skipped instead ~

Comment on lines 530 to 536
... import paddle
... import paddle.static as static
...
... paddle.enable_static()
...
... build_strategy = static.BuildStrategy()
... build_strategy.build_cinn_pass = True
Contributor

>>> should be used here; also, there is too much leftover whitespace ~

Was this converted with convert-doctest? The tool can't handle ... correctly in some wrongly indented cases.
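Background on the prompt rules, as a stdlib-only sketch: in doctest syntax >>> opens an example and ... only continues one, so a block whose lines all start with ... is not recognized as an example at all and silently stops being tested:

```python
import doctest

parser = doctest.DocTestParser()

# Well-formed block: '>>>' opens each example, '...' continues the first.
good = """
>>> x = (1 +
...      2)
>>> print(x)
3
"""
print(len(parser.get_examples(good)))   # two parsed examples

# Broken block: every line starts with '...', no '>>>' anywhere, so the
# parser treats the whole thing as plain narrative text.
bad = """
... x = 1
... print(x)
"""
print(len(parser.get_examples(bad)))    # zero parsed examples
```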

Contributor Author

Yes, it was converted with convert-doctest.

Comment on lines 825 to 831
... import paddle
... import paddle.static as static
...
... paddle.enable_static()
...
... build_strategy = static.BuildStrategy()
... build_strategy.fuse_broadcast_ops = True
Contributor

Use >>> here as well.

Comment on lines 740 to 754
>>> print(test_dygraph_grad(None))
7.

>>> # dy1 = [1], dy2 = [4]
>>> print(test_dygraph_grad([None, grad_value]))
[16.]

>>> # dy1 = [4], dy2 = [1]
>>> print(test_dygraph_grad([grad_value, None]))
[19.]

>>> # dy1 = [3], dy2 = [4]
>>> grad_y1 = paddle.to_tensor(3.0)
>>> print(test_dygraph_grad([grad_y1, grad_value]))
[24.]
Contributor

Tested on AI Studio: the output here is just the number, not a list. Please confirm ~

Comment on lines 903 to 908
array(1, dtype=float32)
array(-1, dtype=float32)
array([2.+1.j, 2.+0.j])
complex128
[3, 2]
[3, 2]
Contributor

On AI Studio with the develop build of Paddle, the output is:

1.0
-1.0
[2.+1.j 2.+0.j]
paddle.complex128
[3, 2]
[3, 2]

Please confirm ~

Comment on lines 684 to 688
>>> import paddle.base as base
>>> place = base.CPUPlace()
>>> exe = base.executor(place)
>>> data = np.array(size=(100, 200, 300))
>>> np_outs = map(lambda x: base.executor._as_lodtensor(x, place), data)
Contributor

  1. numpy needs to be imported
  2. It should be base.Executor
  3. The np.array call is wrong ~
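On the third point: np.array has no size keyword; it builds an array from existing array-like data, so allocating an array of a given shape needs a constructor such as np.ones or np.zeros instead. A sketch (the shape and dtype below are illustrative only, not what the docstring must use):

```python
import numpy as np

# np.array(size=(100, 200, 300)) raises a TypeError: np.array copies
# array-like data and does not accept a `size` keyword argument.
# To allocate an array of a given shape, use a constructor:
data = np.ones((100, 200, 300), dtype=np.float32)
print(data.shape)    # (100, 200, 300)
```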

@longranger2
Contributor Author

@megemini done

>>> y = x[1]
>>> y = y.contiguous()
>>> print(y)
ensor(shape=[], dtype=int64, place=Place(cpu), stop_gradient=True, 2)
Contributor

A leading T is missing (it should be Tensor(...)).

Comment on lines +740 to +754
>>> print(test_dygraph_grad(None))
7.

>>> # dy1 = [1], dy2 = [4]
>>> print(test_dygraph_grad([None, grad_value]))
16.

>>> # dy1 = [4], dy2 = [1]
>>> print(test_dygraph_grad([grad_value, None]))
19.

>>> # dy1 = [3], dy2 = [4]
>>> grad_y1 = paddle.to_tensor(3.0)
>>> print(test_dygraph_grad([grad_y1, grad_value]))
24.
Contributor

@SigureMo this is an issue in my float-precision patching: the case where a string starts with a digit was not handled. PR submitted: #57806 🤕

Member

#57806 has been merged; this PR can now merge develop.

Contributor Author

done

@paddle-ci-bot

paddle-ci-bot bot commented Oct 5, 2023

Sorry to inform you that 592d353's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.

@longranger2 longranger2 requested a review from megemini October 10, 2023 10:52
@SigureMo SigureMo (Member) left a comment

LGTMeow 🐾

@megemini could you take another review pass? ~

@megemini megemini (Contributor) left a comment

LGTM ~ Thanks for the hard work 🫡

@luotao1 luotao1 merged commit a392e4d into PaddlePaddle:develop Oct 11, 2023
Frida-a pushed a commit to Frida-a/Paddle that referenced this pull request Oct 14, 2023
…6, 353 (PaddlePaddle#57565)

* fix sample codes

* fix bug

* fix bug

* fix bug

* Update eager_method.cc
jiahy0825 pushed a commit to jiahy0825/Paddle that referenced this pull request Oct 16, 2023
…6, 353 (PaddlePaddle#57565)

@longranger2 longranger2 deleted the xdoctest8 branch October 29, 2023 03:00
danleifeng pushed a commit to danleifeng/Paddle that referenced this pull request Nov 14, 2023
…6, 353 (PaddlePaddle#57565)

Labels
contributor External developers · HappyOpenSource Pro (advanced happy open source program, with more challenging tasks)
5 participants