Compare commits

...

209 Commits

Author SHA1 Message Date
upstreamdata
ca77573624 update pyproject.toml 2022-07-07 14:40:07 -06:00
upstreamdata
3ec147990b added power limit vs power draw in get_data 2022-07-07 14:34:21 -06:00
upstreamdata
082240bdb6 add some missing imports 2022-07-07 14:30:19 -06:00
UpstreamData
7a7fc2c5a6 Update README.md 2022-07-07 08:09:00 -06:00
UpstreamData
dcc3e07998 Dev (#12)
* changed over to package format and removed tools, added poetry

* reformat into miner_interface project

* add dist to .gitignore

* update readme and finish reformatting

* Added couple missing imports. (#13)

* change name to pyasic

Co-authored-by: upstreamdata <brett@upstreamdata.ca>
Co-authored-by: Mika Impola <mika@impola.fi>
2022-07-07 07:57:34 -06:00
UpstreamData
5261b00aad fixed logfile in settings to allow for adding or removing a logfile 2022-06-22 13:28:37 -06:00
UpstreamData
f18d37a19e add gitignore and fix a small bug with settings if the file doesn't exist 2022-06-14 09:42:20 -06:00
UpstreamData
7c3af3da41 fixed a bug with old bosminers not updating properly 2022-06-10 13:21:31 -06:00
UpstreamData
8948af55f2 fixed a small bug with bosminer MAC 2022-06-10 11:30:24 -06:00
UpstreamData
dd8fe41ad1 added estimate env temp for X19 and change format of X19 and X17 files 2022-06-10 11:22:41 -06:00
UpstreamData
198eedcd43 added env_temp for whatsminers 2022-06-10 11:03:09 -06:00
UpstreamData
f7309decdb finish adding support for a bunch of new avalonminers 2022-06-09 14:38:51 -06:00
UpstreamData
078579d8e1 add a ton of new avalonminers to be added to miner factory later. 2022-06-09 14:10:12 -06:00
UpstreamData
39eeb13409 improved the implementation of fault lights on avalonminers by fixing a bad implementation of ascset. 2022-06-09 13:49:15 -06:00
UpstreamData
dfccd67ccb added fault lights to 1066 miners, and framework for configuring (although it may not work, the documentation implementation is broken) 2022-06-08 15:43:34 -06:00
UpstreamData
10949225c0 fix generate report pie chart to fix overlapping labels when all boards are working 2022-06-08 10:59:52 -06:00
UpstreamData
3a60a3584a added support for avalon 1066 miners 2022-06-08 10:42:19 -06:00
UpstreamData
480aab550c added advanced config file generator 2022-06-07 15:55:43 -06:00
UpstreamData
fa83e61249 fix a bug with config tool generating configs 2022-06-07 14:47:21 -06:00
UpstreamData
2f3411e12d add documentation for MinerConfig 2022-06-07 13:17:44 -06:00
UpstreamData
3e7311687e Update README.md 2022-06-07 12:01:31 -06:00
UpstreamData
bc2d549ce5 moved MinerConfig to config.__init__.py and removed old config methods 2022-06-07 11:50:36 -06:00
UpstreamData
3d31d89c9e update dev-requirements.txt 2022-06-07 11:43:00 -06:00
UpstreamData
15fc27e6fa added configuration for X19 miners 2022-06-07 11:12:26 -06:00
UpstreamData
943ebc77a1 switch braiins miners over to using new config dataclass 2022-06-07 10:49:41 -06:00
UpstreamData
733437ef03 create basic config dataclass to be used to configure miners 2022-06-06 16:05:09 -06:00
UpstreamData
b444245e98 added new whatsminer types to miner factory 2022-06-06 10:09:11 -06:00
UpstreamData
481d31a0f1 added more new whatsminer types 2022-06-06 10:06:17 -06:00
UpstreamData
264db3bdd6 fix a bug with whatsminer M21S missing import 2022-06-06 09:41:10 -06:00
UpstreamData
d292b9c195 improved whatsminer handling, and added VF20 to miner dict 2022-06-06 09:26:38 -06:00
UpstreamData
dce25a679f added new miner type M30S+VF20 2022-06-06 09:17:42 -06:00
UpstreamData
c903631742 improved build process 2022-06-06 09:17:22 -06:00
Colin Crossman
e70bfdc886 Fix indent issue that caused missing MAC addresses (#10) 2022-06-05 15:50:07 -06:00
UpstreamData
8e1803add1 made slight optimizations to get_data and the way the miner gets mac data 2022-06-03 15:30:09 -06:00
UpstreamData
7d61056ea3 added whatsminer M30S+ VE40 2022-06-03 15:00:04 -06:00
UpstreamData
0d497baa45 added mac for M20 series 2022-06-03 14:55:03 -06:00
UpstreamData
d3a71c5a93 added mac addresses to get_data 2022-06-03 14:29:10 -06:00
UpstreamData
895a5b7ac8 fixed more bugs with whatsminers and added more versions 2022-06-03 11:20:34 -06:00
UpstreamData
7a5a0b287c fixed a bug with some versions of whatsminer and improved logging 2022-06-03 09:35:55 -06:00
UpstreamData
c7d73276c8 fixed a small bug with sorting 2022-06-03 08:59:15 -06:00
UpstreamData
4bbb9d0b08 added a basis for configuration of X17 and X19 miners by getting pool info from config file. 2022-06-02 16:06:36 -06:00
UpstreamData
3ee49e6fd7 fixed a warning with ylim being set to 0 2022-06-02 14:52:17 -06:00
UpstreamData
dcd3e99d73 added interval to recording 2022-06-02 14:25:55 -06:00
UpstreamData
64018cdad8 completed basic recording functionality 2022-06-02 14:17:08 -06:00
UpstreamData
e7d269008c added the basics of the recording functionality, just need to write out to file. 2022-06-02 11:08:14 -06:00
UpstreamData
7dfe25e5d2 added base for recording miner data to pdf file. 2022-06-01 16:13:30 -06:00
UpstreamData
382f9cff76 added reboot command for X19 and X17 models on BMMiner 2022-06-01 14:02:34 -06:00
UpstreamData
a5195ff1db fix a bug with testbench where toolbox finds braiins but bench does not 2022-06-01 11:44:07 -06:00
UpstreamData
b1ec726d18 added some docstrings to data 2022-06-01 11:22:30 -06:00
UpstreamData
5ae2cb2b22 fixed a bug with not all table data getting reset on data update 2022-06-01 11:22:12 -06:00
UpstreamData
472a15f4ca added fault light function for X17 BMMiner models 2022-06-01 10:54:45 -06:00
UpstreamData
7cc7973587 fixed a bug with some BOS S17e not returning data from devdetails and fans 2022-06-01 10:19:58 -06:00
UpstreamData
ab964e4c88 fixed a bug with sorting by chip % 2022-06-01 08:15:35 -06:00
UpstreamData
4087874b4a added get hostname to X19 miners 2022-05-31 17:05:05 -06:00
UpstreamData
844deec0d3 add fault light command to X19 miners 2022-05-31 16:54:56 -06:00
UpstreamData
d36eef4c33 switched to httpx 2022-05-31 16:08:17 -06:00
UpstreamData
69d4ee5570 Revert "add .readthedocs.yaml"
This reverts commit e7b01ccdab.
2022-05-31 13:23:41 -06:00
UpstreamData
e6d3ec01fe Merge remote-tracking branch 'origin/master' 2022-05-31 13:18:59 -06:00
UpstreamData
e7b01ccdab add .readthedocs.yaml 2022-05-31 13:18:52 -06:00
UpstreamData
38506903ea fixed an issue with BMMiner get data and bosminer get data not identifying correct board number. 2022-05-31 08:45:49 -06:00
UpstreamData
c9a1560052 Merge remote-tracking branch 'origin/master' 2022-05-30 14:20:48 -06:00
UpstreamData
88f8ff10b7 fixed a bug with sorting 2022-05-30 14:20:37 -06:00
UpstreamData
11d38c9c3b fixed a bug with sorting 2022-05-30 14:19:57 -06:00
UpstreamData
0082037f45 add dev-requirements and remove cx-freeze from requirements 2022-05-30 13:29:47 -06:00
UpstreamData
dd5ccafa1e added listener function to cfg util 2022-05-30 13:27:56 -06:00
UpstreamData
739126935a fixed some bugs with differing version of BTMiner and different versions of M30S++ having different chip counts 2022-05-30 11:13:37 -06:00
UpstreamData
5c850a43a9 ignore errors with S19 multicommands 2022-05-30 09:46:05 -06:00
UpstreamData
24b037f273 fixed a bug with bmminer stats 2022-05-30 09:40:30 -06:00
UpstreamData
f847700c05 fixed a bug with antminers not reporting type because of fans in testbench, and added a long running get data for long tests 2022-05-27 11:54:51 -06:00
UpstreamData
69820dd9d2 slightly improved getting data from bmminer X9, X17, and X19 with an improvement to finding offset 2022-05-27 11:47:44 -06:00
UpstreamData
ad4b710cb7 miner is no longer cached in miner factory if it is unknown 2022-05-27 11:05:35 -06:00
UpstreamData
c53c18654b improved bmminer with a fan and board offset 2022-05-27 11:01:25 -06:00
UpstreamData
18797f4b56 added S9 data for bmminer 2022-05-27 10:41:41 -06:00
UpstreamData
e86c93e287 fixed refreshing data 2022-05-26 16:19:15 -06:00
UpstreamData
89cfde28f5 added chips for M30S 2022-05-26 16:05:48 -06:00
UpstreamData
0f2a867828 fix wrong import from collections 2022-05-26 15:53:38 -06:00
UpstreamData
4f5aef2d45 update some type hints and comments in miner factory, and remove some unneeded imports 2022-05-26 15:51:57 -06:00
UpstreamData
96801f93d1 made fault lights an async generator to make them much faster 2022-05-26 15:41:41 -06:00
UpstreamData
a8ce73c3d6 fixed an issue with the windows event loop not working with asyncio.create_subprocess_shell 2022-05-26 15:25:04 -06:00
UpstreamData
513dd2b981 fixed a bug with testbench not removing miners when there were 0 online 2022-05-26 13:48:49 -06:00
UpstreamData
c35b30e949 fixed formatting issues 2022-05-26 13:23:32 -06:00
UpstreamData
942f2a1c8d improved type hinting and formatting in miner factory 2022-05-26 12:14:28 -06:00
UpstreamData
9078df680e added get_data to web_monitor dashboard 2022-05-26 11:15:01 -06:00
UpstreamData
527997cc58 fixed a bug with refreshing data 2022-05-26 11:13:39 -06:00
UpstreamData
41433bcaf5 change hashrate data to 1m as it seems more consistent, and add get_data to web monitor 2022-05-26 11:12:31 -06:00
UpstreamData
3451b88669 added temps and fans for bmminer and cgminer 2022-05-26 10:57:52 -06:00
UpstreamData
a42af2764e fixed a bug with ideal chips not getting set fast enough 2022-05-26 10:43:10 -06:00
UpstreamData
baaad73eb8 fixed a bug with pool prefix not getting removed when getting data 2022-05-26 10:39:39 -06:00
UpstreamData
34c9f85098 added btminer fan data and per chip temps 2022-05-26 10:36:39 -06:00
UpstreamData
d6638fa4d2 added fan counts to miners, and added more data to bosminer and miner data 2022-05-26 10:26:40 -06:00
UpstreamData
0f51487d3f added miner data to base miner 2022-05-26 08:41:51 -06:00
UpstreamData
3a11b173c3 added chip percent to config tool 2022-05-25 15:02:48 -06:00
UpstreamData
568f86700b improved X19 miner scan speed and implemented miner data in miners 2022-05-25 14:44:23 -06:00
UpstreamData
3b702aac2c improved handling of MinerData by improving dataclass 2022-05-25 14:01:52 -06:00
UpstreamData
6fbd9faffd updated network test to use unittest 2022-05-25 11:49:38 -06:00
UpstreamData
9eb2259aae removed extra print statements and a loop that wasn't needed in miner factory 2022-05-25 09:02:37 -06:00
UpstreamData
149c386a4c fix a bug with X17 not responding 2022-05-25 08:56:02 -06:00
UpstreamData
726e7ff0f0 add support for basic S9is 2022-05-24 14:43:22 -06:00
UpstreamData
87a690eb00 create basic dataclass for miner data 2022-05-20 10:12:51 -06:00
UpstreamData
fd5dba4036 fixed some bugs and improved testbench look 2022-05-19 15:54:29 -06:00
UpstreamData
e54847337a update testbench color palette 2022-05-19 15:31:12 -06:00
UpstreamData
3ff43c3ccd removed old tools that will no longer work 2022-05-19 11:56:12 -06:00
UpstreamData
ec5563f2f0 slightly improved network functionality and added tests for network 2022-05-19 11:55:38 -06:00
UpstreamData
40f14876cc made miner count a fixed position bar 2022-05-19 11:05:40 -06:00
UpstreamData
6abfe8a503 switch testbench to dark mode and add miner count 2022-05-19 10:47:36 -06:00
Dewey Cox
0a4d52ef03 fixed a bug with matplotlib.pyplot.subplots() causing resizing of windows 2022-05-19 09:11:28 -06:00
Colin Crossman
e4207e0120 Allow MinerFactory to take a list of discrete IPs (#7) 2022-05-18 20:16:47 -06:00
UpstreamData
ed89476866 fixed a bug with temp not displaying on the cfg tool 2022-05-18 12:50:44 -06:00
UpstreamData
7f7964526c fixed some bugs with scanning being too fast to get data and killing the tasks 2022-05-18 12:13:20 -06:00
UpstreamData
85b282740a update scanning in web interface 2022-05-18 11:39:14 -06:00
UpstreamData
8cbf3a20a3 make miner ips a true ip address, and allow sorting miners using __lt__ and __gt__ 2022-05-18 11:34:39 -06:00
UpstreamData
8ebcbd3c33 vastly improved scanning in web monitor 2022-05-18 11:12:38 -06:00
UpstreamData
c3e285a9ee fix some bugs in web monitor 2022-05-18 11:06:21 -06:00
UpstreamData
9f19b42de5 fixed some bugs with older versions of braiins OS +, and fixed a bug in testbench 2022-05-17 13:09:10 -06:00
UpstreamData
3d265e823b update README.md 2022-05-17 09:08:50 -06:00
UpstreamData
5e6bc8c8ef add mac addresses for bosminer, and reformat 2022-05-17 08:51:56 -06:00
UpstreamData
871499b77f fix some bugs in miner listener 2022-05-16 16:22:05 -06:00
UpstreamData
117a161fd5 added miner listener to listen for ip reporting 2022-05-16 16:16:22 -06:00
UpstreamData
40bacbf41c add getting mac for whatsminers 2022-05-16 15:01:04 -06:00
UpstreamData
e091863aa7 fixed a bug with suspended whatsminers 2022-05-16 14:06:23 -06:00
UpstreamData
85e8ac63f1 update light column 2022-05-16 13:46:16 -06:00
UpstreamData
a5252e3a84 update README.md 2022-05-16 12:30:04 -06:00
UpstreamData
404d6590db improved resizing and light 2022-05-16 11:55:13 -06:00
UpstreamData
1d04399daf made the window of the cfg util resizeable 2022-05-16 11:35:45 -06:00
UpstreamData
03ebcacca5 added an old version of Bosminer for non plus miners to be able to update 2022-05-16 11:24:32 -06:00
UpstreamData
75934fd7fe fixed another bug with testbench putting miners into recovery mode 2022-05-16 10:50:07 -06:00
UpstreamData
bbeca15799 attempted to fix a bug with testbench 2022-05-16 10:40:28 -06:00
UpstreamData
45befb569b updated a bunch of miner chip counts, added S19a, and fixed a bug with whatsminer M30S++ 2022-05-16 08:42:26 -06:00
UpstreamData
61334ed99e Merge remote-tracking branch 'origin/combine_board_cfg_util' into combine_board_cfg_util 2022-05-13 15:37:27 -06:00
UpstreamData
2bf059df01 remove board util 2022-05-13 15:35:42 -06:00
UpstreamData
9c2de26182 switched over to hashrate av to be more accurate when getting data 2022-05-13 15:35:42 -06:00
UpstreamData
714983cddc added exporting reports from config tool 2022-05-13 15:35:42 -06:00
UpstreamData
191f1d24b9 improved send command functionality with a generator and added progress to it 2022-05-13 15:35:42 -06:00
UpstreamData
5a0bafb964 fixed a bug for scanning S9s with no boards for testing 2022-05-13 15:35:42 -06:00
UpstreamData
67aedd319d update README.md 2022-05-13 15:35:41 -06:00
UpstreamData
44012c50d6 finished updating the miner type handlers to create subclasses of the backend and type to create a miner, each of which handles its own data to simplify creation of new miner types 2022-05-13 15:35:41 -06:00
UpstreamData
06540efc98 changed the way antminers and whatsminers are handled in the factory to allow for more precision on chip counts 2022-05-13 15:35:41 -06:00
UpstreamData
9d0d1a24d9 added S19 board handler 2022-05-13 15:35:41 -06:00
UpstreamData
8568f91482 added btminer board data 2022-05-13 15:35:41 -06:00
UpstreamData
64918e5552 added bosminer board data 2022-05-13 15:35:41 -06:00
UpstreamData
53d5ecd04a added images for boards 2022-05-13 15:35:41 -06:00
UpstreamData
1b0e80a418 added basic framework for boards in config util 2022-05-13 15:35:41 -06:00
UpstreamData
9ad506a313 remove board util 2022-05-13 15:35:11 -06:00
UpstreamData
18c4bbd09c switched over to hashrate av to be more accurate when getting data 2022-05-13 15:31:32 -06:00
UpstreamData
0d123d5dd8 added exporting reports from config tool 2022-05-13 15:23:51 -06:00
UpstreamData
b9b91293fe improved send command functionality with a generator and added progress to it 2022-05-13 14:28:51 -06:00
UpstreamData
47a702c94c fixed a bug for scanning S9s with no boards for testing 2022-05-13 14:03:27 -06:00
UpstreamData
6d5a288120 update README.md 2022-05-13 11:35:37 -06:00
UpstreamData
038aae95ac finished updating the miner type handlers to create subclasses of the backend and type to create a miner, each of which handles its own data to simplify creation of new miner types 2022-05-13 11:27:56 -06:00
UpstreamData
dd84aede25 changed the way antminers and whatsminers are handled in the factory to allow for more precision on chip counts 2022-05-12 16:42:02 -06:00
UpstreamData
dc8ad271de added S19 board handler 2022-05-12 15:16:05 -06:00
UpstreamData
b78c1cdca5 added wattage to configure tab 2022-05-12 14:42:37 -06:00
UpstreamData
0eb7ced932 added btminer board data 2022-05-12 13:20:57 -06:00
UpstreamData
8e58f4492f added bosminer board data 2022-05-12 13:12:54 -06:00
UpstreamData
95fb32de19 added images for boards 2022-05-12 12:35:33 -06:00
UpstreamData
5145dc19f8 added basic framework for boards in config util 2022-05-12 11:29:28 -06:00
UpstreamData
1808d62bba fix references to some table headers 2022-05-11 14:42:47 -06:00
UpstreamData
97ef4dfe37 fixed some bugs with testbench on the latest version 2022-05-11 14:28:53 -06:00
UpstreamData
174a132e75 attempt to fix a bug in testbench 2022-05-11 14:04:05 -06:00
UpstreamData
84d6e58ebe change progress bar completion animation 2022-05-11 13:24:54 -06:00
UpstreamData
e9a1483e5f fixed some small bugs with whatsminers and progress bar 2022-05-11 13:20:14 -06:00
UpstreamData
4eb51eed20 update CFG-Util-README.md 2022-05-11 10:51:36 -06:00
UpstreamData
066fc1a4b3 changed Temperature to Temp and added more spacing to pool user 2022-05-11 08:43:32 -06:00
UpstreamData
cc24236c0a update requirements.txt 2022-05-11 08:40:00 -06:00
UpstreamData
564cd42eae added ctrl c and ctrl a functionality to the tables 2022-05-11 08:37:17 -06:00
UpstreamData
8677eff491 moved miner count and hashrate to top of tool 2022-05-10 14:00:50 -06:00
UpstreamData
63a21ea9aa updated formatting on scrollbars 2022-05-10 13:53:18 -06:00
UpstreamData
1c9d3dc84d updated formatting on page 2022-05-10 13:44:08 -06:00
UpstreamData
0dacd3d294 changed sorting to show up on the table headers 2022-05-10 11:51:26 -06:00
UpstreamData
6fa74613b4 updated look of CFG util 2022-05-10 11:13:27 -06:00
UpstreamData
f7fb7a3acb update requirements.txt 2022-05-09 10:25:25 -06:00
UpstreamData
666c5bfc64 added new text buttons to show total hashrate and current sort key 2022-05-09 10:24:48 -06:00
UpstreamData
1f8d92f6bb fixed some bugs with sorting 2022-05-09 09:59:48 -06:00
UpstreamData
ef336a9e23 added asyncio event loop policy update to fix some bugs 2022-05-09 09:20:21 -06:00
UpstreamData
7fe6fd47fb added sorting to command table (Tree) 2022-05-09 09:14:32 -06:00
UpstreamData
91a0298d96 fix a bug where unknown miners would break configuration 2022-05-06 16:29:07 -06:00
UpstreamData
ed3d8fc815 Merge branch 'pyqt_gui_cfg_util' 2022-05-06 16:22:28 -06:00
UpstreamData
4f2d630746 fix formatting on readme 2022-05-06 16:22:13 -06:00
UpstreamData
a8c685a883 switched cfg_util over to new version 2022-05-06 16:20:02 -06:00
UpstreamData
09660e1934 added indicators of what function is running 2022-05-06 16:12:17 -06:00
UpstreamData
c01908ff9a added custom command functionality 2022-05-06 16:01:50 -06:00
UpstreamData
267c388a95 added restarting and rebooting miner backends 2022-05-06 15:52:21 -06:00
UpstreamData
8215d33241 added configuration button 2022-05-06 15:39:18 -06:00
UpstreamData
f4258a304a add importing configuration from miners 2022-05-06 15:14:49 -06:00
UpstreamData
514fafea58 add generate command and change config converters to non async 2022-05-06 15:06:18 -06:00
UpstreamData
e324369fe0 fixed some bugs with sorting when refreshing data and added refreshing data 2022-05-06 14:55:58 -06:00
UpstreamData
3bc9287668 add scan retries to getting data 2022-05-06 13:51:20 -06:00
UpstreamData
d90bf190c5 added reverse sorting and fixed hashrate sorting 2022-05-06 13:34:12 -06:00
UpstreamData
8cc6f66458 added sorting to the 3 main tables 2022-05-06 12:03:43 -06:00
UpstreamData
a2b071af4f fully implemented fault light command 2022-05-06 11:36:57 -06:00
UpstreamData
b7b589802f added avalon1 1066 to board util tentatively 2022-05-06 09:11:08 -06:00
UpstreamData
93912a6df6 fixed a bug with hashrate data not getting sent with some miners 2022-05-06 08:41:04 -06:00
UpstreamData
ffce15f653 fixed some bugs with latest version of toolbox 2022-05-06 08:41:04 -06:00
UpstreamData
725b14e583 added table manager, to manage tables and handle the treeview 2022-05-05 15:53:13 -06:00
UpstreamData
26c6e47f1e added the ability to update the treeview and images in it no longer are as buggy 2022-05-05 14:47:18 -06:00
UpstreamData
51dae7375f added select all button and functionality 2022-05-05 13:48:57 -06:00
UpstreamData
801cfc4ff8 updated some formatting and improved pool return format 2022-05-05 13:02:00 -06:00
UpstreamData
ac3ff7a63e justify hostname to the left 2022-05-05 12:12:10 -06:00
UpstreamData
1b22810f4b fixed formatting on hashrate 2022-05-05 12:07:57 -06:00
UpstreamData
b756c9e4a1 added getting data for btminer 2022-05-05 11:37:04 -06:00
UpstreamData
64b5e6c032 added getting data for bmminer and cgminer 2022-05-05 11:19:11 -06:00
UpstreamData
a13f5dd2d1 fix some bugs and start adding bmminer get_data function 2022-05-05 10:52:18 -06:00
UpstreamData
e6ea8d3e16 added hostname logging and a generalized get data function for braiins OS 2022-05-05 10:35:47 -06:00
UpstreamData
af37850289 greatly improved functionality of miner factory 2022-05-05 09:17:20 -06:00
UpstreamData
6ecdfa1cf8 scanning now gets data 2022-05-04 16:04:46 -06:00
UpstreamData
c0b21ebc23 fixed scanning to the tree for commands 2022-05-04 15:06:15 -06:00
UpstreamData
184ada417f added tables and basic scanning 2022-05-04 14:44:19 -06:00
UpstreamData
b636860ecb started basic cfg util changes 2022-05-04 13:08:58 -06:00
UpstreamData
0107fdacde update requirements.txt 2022-05-02 10:36:20 -06:00
276 changed files with 6882 additions and 7587 deletions

.gitignore (vendored, new file, 8 lines added)

@@ -0,0 +1,8 @@
venv/
build/
dist/
__pycache__/
pyvenv.cfg
.env/
bin/
lib/


@@ -1,13 +0,0 @@
FROM python:3.10-slim-buster
EXPOSE 80
WORKDIR /minerInterface-web_monitor
COPY tools/web_monitor/requirements.txt .
RUN pip install --no-cache-dir --upgrade -r requirements.txt
COPY . .
CMD ["uvicorn", "tools.web_monitor.app:app", "--host", "0.0.0.0", "--port", "80"]

README.md (178 lines changed)

@@ -1,27 +1,15 @@
# minerInterface
# pyasic
*A set of modules for interfacing with many common types of ASIC bitcoin miners, using both their API and SSH.*
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
## Usage
To use this repo, first download it, create a virtual environment, enter the virtual environment, and install relevant packages by navigating to this directory and running ```pip install -r requirements.txt``` on Windows or ```pip3 install -r requirements.txt``` on Mac or UNIX if the first command fails.
You can also use poetry by initializing and running ```poetry install```
For those of you who aren't comfortable with code and developer tools, there are windows builds of the GUI applications here -> (https://drive.google.com/drive/folders/1DjR8UOS_g0ehfiJcgmrV0FFoqFvE9akW?usp=sharing)
### CFG Util
*CFG Util is a GUI for interfacing with the miners easily, it is mostly self-explanatory.*
To use CFG Util you have 2 options -
1. Run it directly with the file ```config_tool.py``` or import it with ```from cfg_util import main```, then run the ```main()``` function in an asyncio event loop like -
```python
from tools.cfg_util import main
if __name__ == '__main__':
main()
```
2. Make a build of the CFG Util for your system using cx_freeze and ```make_cfg_tool_exe.py```
(Alternatively, you can get a build made by me here -> https://drive.google.com/drive/folders/1nzojuGRu0IszIGpwx7SvG5RlJ2_KXIOv)
1. Open either Command Prompt on Windows or Terminal on Mac or UNIX.
2. Navigate to this directory, and run ```make_cfg_tool_exe.py build``` on Windows or ```python3 make_cfg_tool_exe.py``` on Mac or UNIX.
### Interfacing with miners programmatically
<br>
@@ -47,8 +35,7 @@ A basic script to find all miners on the network and get the hashrate from them
```python
import asyncio
from network import MinerNetwork
from tools.cfg_util.func.parse_data import safe_parse_api_data
from pyasic.network import MinerNetwork
async def get_hashrate():
@@ -60,18 +47,11 @@ async def get_hashrate():
# Miner Network scan function returns Miner classes for all miners found
miners = await miner_network.scan_network_for_miners()
# Each miner will return with its own set of functions, and an API class instance
tasks = [miner.api.summary() for miner in miners]
tasks = [miner.get_data() for miner in miners]
# Gather all tasks asynchronously and run them
data = await asyncio.gather(*tasks)
parse_tasks = []
for item in data:
# safe_parse_api_data parses the data from a miner API
# It will raise an APIError (from API import APIError) if there is a problem
parse_tasks.append(safe_parse_api_data(item, 'SUMMARY', 0, 'MHS 5s'))
# Gather all tasks asynchronously and run them
data = await asyncio.gather(*parse_tasks)
# Print a list of all the hashrates
print(data)
# now we have a list of MinerData, and can get .hashrate
print([item.hashrate for item in data])
if __name__ == '__main__':
@@ -83,8 +63,7 @@ You can also create your own miner without scanning if you know the IP:
```python
import asyncio
import ipaddress
from miners.miner_factory import MinerFactory
from tools.cfg_util.func.parse_data import safe_parse_api_data
from pyasic.miners.miner_factory import MinerFactory
async def get_miner_hashrate(ip: str):
@@ -95,77 +74,55 @@ async def get_miner_hashrate(ip: str):
# Wait for the factory to return the miner
miner = await miner_factory.get_miner(miner_ip)
# Get the API data
summary = await miner.api.summary()
# safe_parse_api_data parses the data from a miner API
# It will raise an APIError (from API import APIError) if there is a problem
data = await safe_parse_api_data(summary, 'SUMMARY', 0, 'MHS 5s')
print(data)
data = await miner.get_data()
# print out hashrate
print(data.hashrate)
if __name__ == '__main__':
asyncio.new_event_loop().run_until_complete(get_miner_hashrate(str("192.168.1.69")))
```
<br>
Or generate a miner directly without the factory:
```python
import asyncio
from miners.bosminer import BOSMiner
from tools.cfg_util.func.parse_data import safe_parse_api_data
async def get_miner_hashrate(ip: str):
# Create a BOSminer miner object
miner = BOSMiner(ip)
# Get the API data
summary = await miner.api.summary()
# safe_parse_api_data parses the data from a miner API
# It will raise an APIError (from API import APIError) if there is a problem
data = await safe_parse_api_data(summary, 'SUMMARY', 0, 'MHS 5s')
print(data)
if __name__ == '__main__':
asyncio.new_event_loop().run_until_complete(get_miner_hashrate(str("192.168.1.69")))
```
<br>
Or finally, just get the API directly:
```python
import asyncio
from API.bosminer import BOSMinerAPI
from tools.cfg_util.func.parse_data import safe_parse_api_data
async def get_miner_hashrate(ip: str):
# Create a BOSminerAPI object
# Port can be declared manually, if not it defaults to 4028
api = BOSMinerAPI(ip, port=4028)
# Get the API data
summary = await api.summary()
# safe_parse_api_data parses the data from a miner API
# It will raise an APIError (from API import APIError) if there is a problem
data = await safe_parse_api_data(summary, 'SUMMARY', 0, 'MHS 5s')
print(data)
if __name__ == '__main__':
asyncio.new_event_loop().run_until_complete(get_miner_hashrate(str("192.168.1.69")))
asyncio.new_event_loop().run_until_complete(
get_miner_hashrate(str("192.168.1.69")))
```
Now that you know that, lets move on to some common API functions that you might want to use.
### Common commands:
* Get the data used by the config utility, this includes pool data, wattage use, temperature, hashrate, etc:
* All the data from below commands and more are returned from this in a consistent dataclass. Check out the `MinerData` class in `/data/__init__.py` for more information.
```python
import asyncio
import ipaddress
from pyasic.miners.miner_factory import MinerFactory
async def get_miner_pool_data(ip: str):
# Instantiate a Miner Factory to generate miners from their IP
miner_factory = MinerFactory()
# Make the string IP into an IP address
miner_ip = ipaddress.ip_address(ip)
# Wait for the factory to return the miner
miner = await miner_factory.get_miner(miner_ip)
# Get the data
data = await miner.get_data()
print(data)
if __name__ == '__main__':
asyncio.new_event_loop().run_until_complete(
get_miner_pool_data(str("192.168.1.69")))
```
* Getting pool data:
```python
import asyncio
import ipaddress
from miners.miner_factory import MinerFactory
from tools.cfg_util.func.parse_data import safe_parse_api_data
from pyasic.miners.miner_factory import MinerFactory
async def get_miner_pool_data(ip: str):
@@ -179,7 +136,7 @@ async def get_miner_pool_data(ip: str):
pools = await miner.api.pools()
# safe_parse_api_data parses the data from a miner API
# It will raise an APIError (from API import APIError) if there is a problem
data = await safe_parse_api_data(pools, 'POOLS')
data = pools["POOLS"]
# parse further from here to get all the pool info you want.
# each pool is on a different index eg:
# data[0] is pool 1
@@ -189,7 +146,8 @@ async def get_miner_pool_data(ip: str):
if __name__ == '__main__':
asyncio.new_event_loop().run_until_complete(get_miner_pool_data(str("192.168.1.69")))
asyncio.new_event_loop().run_until_complete(
get_miner_pool_data(str("192.168.1.69")))
```
* Getting temperature data:
@@ -203,8 +161,7 @@ A pretty good example of really trying to make this robust is in ```cfg_util.fun
```python
import asyncio
import ipaddress
from miners.miner_factory import MinerFactory
from tools.cfg_util.func.parse_data import safe_parse_api_data
from pyasic.miners.miner_factory import MinerFactory
async def get_miner_temperature_data(ip: str):
@@ -216,14 +173,14 @@ async def get_miner_temperature_data(ip: str):
miner = await miner_factory.get_miner(miner_ip)
# Get the API data
summary = await miner.api.summary()
# safe_parse_api_data parses the data from a miner API
# It will raise an APIError (from API import APIError) if there is a problem
data = await safe_parse_api_data(summary, 'SUMMARY', 0, "Temperature")
data = summary['SUMMARY'][0]["Temperature"]
print(data)
if __name__ == '__main__':
asyncio.new_event_loop().run_until_complete(get_miner_temperature_data(str("192.168.1.69")))
asyncio.new_event_loop().run_until_complete(
get_miner_temperature_data(str("192.168.1.69")))
```
* Getting power data:
@@ -233,11 +190,11 @@ How about data on the power usage of the miner? This one only works for Whatsmi
```python
import asyncio
import ipaddress
from miners.miner_factory import MinerFactory
from tools.cfg_util.func.parse_data import safe_parse_api_data
from pyasic.miners.miner_factory import MinerFactory
async def get_miner_power_data(ip: str):
data = None
# Instantiate a Miner Factory to generate miners from their IP
miner_factory = MinerFactory()
# Make the string IP into an IP address
@@ -249,19 +206,21 @@ async def get_miner_power_data(ip: str):
# send the command
tunerstatus = await miner.api.tunerstatus()
# parse the return
data = await safe_parse_api_data(tunerstatus, 'TUNERSTATUS', 0, "PowerLimit")
data = tunerstatus['TUNERSTATUS'][0]["PowerLimit"]
else:
# send the command
# whatsminers have the power info in summary
summary = await miner.api.summary()
# parse the return
data = await safe_parse_api_data(summary, 'SUMMARY', 0, "Power")
data = summary['SUMMARY'][0]["Power"]
print(data)
if data:
print(data)
if __name__ == '__main__':
asyncio.new_event_loop().run_until_complete(get_miner_power_data(str("192.168.1.69")))
asyncio.new_event_loop().run_until_complete(
get_miner_power_data(str("192.168.1.69")))
```
* Multicommands:
@@ -272,8 +231,8 @@ How about we get the current pool user and hashrate in 1 command?
```python
import asyncio
import ipaddress
from miners.miner_factory import MinerFactory
from tools.cfg_util.func.parse_data import safe_parse_api_data
from pyasic.miners.miner_factory import MinerFactory
from tools.cfg_util_old.func.parse_data import safe_parse_api_data
async def get_miner_hashrate_and_pool(ip: str):
@@ -286,15 +245,14 @@ async def get_miner_hashrate_and_pool(ip: str):
# Get the API data
api_data = await miner.api.multicommand("pools", "summary")
if "pools" in api_data.keys():
user = await safe_parse_api_data(api_data, "pools", 0, "POOLS", 0, "User")
user = api_data["pools"][0]["POOLS"][0]["User"]
print(user)
if "summary" in api_data.keys():
hashrate = await safe_parse_api_data(api_data, "summary", 0, "SUMMARY", 0, "MHS av")
hashrate = api_data["summary"][0]["SUMMARY"][0]["MHS av"]
print(hashrate)
if __name__ == '__main__':
asyncio.new_event_loop().run_until_complete(get_miner_hashrate_and_pool(str("192.168.1.9")))
asyncio.new_event_loop().run_until_complete(
get_miner_hashrate_and_pool(str("192.168.1.9")))
```


@@ -1,4 +0,0 @@
from tools.bad_board_util import main
if __name__ == "__main__":
main()


@@ -1,78 +0,0 @@
"""
SAMPLE CONFIG
-------------------
{
"format": {
"version": "1.2+", # -> (default = "1.2+", str, (bos: format.version))
"model": "Antminer S9", # -> (default = "Antminer S9", str, (bos: format.model))
"generator": "upstream_config_util", # -> (hidden, always = "upstream_config_util", str, (bos: format.generator))
"timestamp": 1606842000, # -> (hidden, always = int(time.time()) (current unix time), int, (bos: format.timestamp))
},
"temperature": {
"mode": "auto", # -> (default = "auto", str["auto", "manual", "disabled"], (bos: temp_control.mode))
"target": 70.0, # -> (default = 70.0, float, (bos: temp_control.target_temp))
"hot": 80.0, # -> (default = 80.0, float, (bos: temp_control.hot_temp))
"danger": 90.0, # -> (default = 90.0, float, (bos: temp_control.dangerous_temp))
},
"fans": { # -> (optional, required if temperature["mode"] == "disabled", (bos: fan_control))
"min_fans": 1, # -> (default = 1, int, (bos: fan_control.min_fans))
"speed": 100, # -> (default = 100, 0 < int < 100, (bos: fan_control.speed))
},
"asicboost": True, # -> (default = True, bool, (bos : hash_chain_global.asic_boost))
"pool_groups": [
{
"group_name": "Upstream", # -> (default = "group_{index}" (group_0), str, (bos: group.[index].name))
"quota": 1, # -> (default = 1, int, (bos: group.[index].quota))
"pools": [
{
"url": "stratum+tcp://stratum.slushpool.com:3333", # -> (str, (bos: group.[index].pool.[index].url))
"username": "UpstreamDataInc.test", # -> (str, (bos: group.[index].pool.[index].user))
"password": "123", # -> (str, (bos: group.[index].pool.[index].password))
},
{
"url": "stratum+tcp://us-east.stratum.slushpool.com:3333", # -> (str, (bos: group.[index].pool.[index].url))
"username": "UpstreamDataInc.test", # -> (str, (bos: group.[index].pool.[index].user))
"password": "123", # -> (str, (bos: group.[index].pool.[index].password))
},
{
"url": "stratum+tcp://ca.stratum.slushpool.com:3333", # -> (str, (bos: group.[index].pool.[index].url))
"username": "UpstreamDataInc.test", # -> (str, (bos: group.[index].pool.[index].user))
"password": "123", # -> (str, (bos: group.[index].pool.[index].password))
},
]
},
{
"group_name": "Upstream2", # -> (default = "group_{index}" (group_1), str, (bos: group.[index].name))
"quota": 4, # -> (default = 1, int, (bos: group.[index].quota))
"pools": [
{
"url": "stratum+tcp://stratum.slushpool.com:3333", # -> (str, (bos: group.[index].pool.[index].url))
"username": "UpstreamDataTesting.test", # -> (str, (bos: group.[index].pool.[index].user))
"password": "123", # -> (str, (bos: group.[index].pool.[index].password))
},
{
"url": "stratum+tcp://us-east.stratum.slushpool.com:3333", # -> (str, (bos: group.[index].pool.[index].url))
"username": "UpstreamDataTesting.test", # -> (str, (bos: group.[index].pool.[index].user))
"password": "123", # -> (str, (bos: group.[index].pool.[index].password))
},
{
"url": "stratum+tcp://ca.stratum.slushpool.com:3333", # -> (str, (bos: group.[index].pool.[index].url))
"username": "UpstreamDataTesting.test", # -> (str, (bos: group.[index].pool.[index].user))
"password": "123", # -> (str, (bos: group.[index].pool.[index].password))
},
]
},
],
"autotuning": {
"enabled": True, # -> (default = True, bool), (bos: autotuning.enabled)
"wattage": 900, # -> (default = 900, int, (bos: autotuning.psu_power_limit))
},
"power_scaling": {
"enabled": False, # -> (default = False, bool, (bos: power_scaling.enabled))
"power_step": 100, # -> (default = 100, int, (bos: power_scaling.power_step))
"min_psu_power_limit": 800, # -> (default = 800, int, (bos: power_scaling.min_psu_power_limit))
"shutdown_enabled": True, # -> (default = False, bool, (bos: power_scaling.shutdown_enabled))
"shutdown_duration": 3.0, # -> (default = 3.0, float, (bos: power_scaling.shutdown_duration))
}
}
"""


@@ -1,221 +0,0 @@
import time
import yaml
import toml
async def bos_config_convert(config: dict):
out_config = {}
for opt in config:
if opt == "format":
out_config["format"] = config[opt]
out_config["format"]["generator"] = "upstream_config_util"
out_config["format"]["timestamp"] = int(time.time())
elif opt == "temp_control":
out_config["temperature"] = {}
if "mode" in config[opt].keys():
out_config["temperature"]["mode"] = config[opt]["mode"]
else:
out_config["temperature"]["mode"] = "auto"
if "target_temp" in config[opt].keys():
out_config["temperature"]["target"] = config[opt]["target_temp"]
else:
out_config["temperature"]["target"] = 70.0
if "hot_temp" in config[opt].keys():
out_config["temperature"]["hot"] = config[opt]["hot_temp"]
else:
out_config["temperature"]["hot"] = 80.0
if "dangerous_temp" in config[opt].keys():
out_config["temperature"]["danger"] = config[opt]["dangerous_temp"]
else:
out_config["temperature"]["danger"] = 90.0
elif opt == "fan_control":
out_config["fans"] = {}
if "min_fans" in config[opt].keys():
out_config["fans"]["min_fans"] = config[opt]["min_fans"]
else:
out_config["fans"]["min_fans"] = 1
if "speed" in config[opt].keys():
out_config["fans"]["speed"] = config[opt]["speed"]
else:
out_config["fans"]["speed"] = 100
elif opt == "group":
out_config["pool_groups"] = [{} for _item in range(len(config[opt]))]
for idx in range(len(config[opt])):
out_config["pool_groups"][idx]["pools"] = []
out_config["pool_groups"][idx] = {}
if "name" in config[opt][idx].keys():
out_config["pool_groups"][idx]["group_name"] = config[opt][idx][
"name"
]
else:
out_config["pool_groups"][idx]["group_name"] = f"group_{idx}"
if "quota" in config[opt][idx].keys():
out_config["pool_groups"][idx]["quota"] = config[opt][idx]["quota"]
else:
out_config["pool_groups"][idx]["quota"] = 1
out_config["pool_groups"][idx]["pools"] = [
{} for _item in range(len(config[opt][idx]["pool"]))
]
for pool_idx in range(len(config[opt][idx]["pool"])):
out_config["pool_groups"][idx]["pools"][pool_idx]["url"] = config[
opt
][idx]["pool"][pool_idx]["url"]
out_config["pool_groups"][idx]["pools"][pool_idx][
"username"
] = config[opt][idx]["pool"][pool_idx]["user"]
out_config["pool_groups"][idx]["pools"][pool_idx][
"password"
] = config[opt][idx]["pool"][pool_idx]["password"]
elif opt == "autotuning":
out_config["autotuning"] = {}
if "enabled" in config[opt].keys():
out_config["autotuning"]["enabled"] = config[opt]["enabled"]
else:
out_config["autotuning"]["enabled"] = True
if "psu_power_limit" in config[opt].keys():
out_config["autotuning"]["wattage"] = config[opt]["psu_power_limit"]
else:
out_config["autotuning"]["wattage"] = 900
elif opt == "power_scaling":
out_config["power_scaling"] = {}
if "enabled" in config[opt].keys():
out_config["power_scaling"]["enabled"] = config[opt]["enabled"]
else:
out_config["power_scaling"]["enabled"] = False
if "power_step" in config[opt].keys():
out_config["power_scaling"]["power_step"] = config[opt]["power_step"]
else:
out_config["power_scaling"]["power_step"] = 100
if "min_psu_power_limit" in config[opt].keys():
out_config["power_scaling"]["min_psu_power_limit"] = config[opt][
"min_psu_power_limit"
]
else:
out_config["power_scaling"]["min_psu_power_limit"] = 800
if "shutdown_enabled" in config[opt].keys():
out_config["power_scaling"]["shutdown_enabled"] = config[opt][
"shutdown_enabled"
]
else:
out_config["power_scaling"]["shutdown_enabled"] = False
if "shutdown_duration" in config[opt].keys():
out_config["power_scaling"]["shutdown_duration"] = config[opt][
"shutdown_duration"
]
else:
out_config["power_scaling"]["shutdown_duration"] = 3.0
return yaml.dump(out_config, sort_keys=False)
async def general_config_convert_bos(yaml_config, user_suffix: str = None):
config = yaml.load(yaml_config, Loader=yaml.SafeLoader)
out_config = {}
for opt in config:
if opt == "format":
out_config["format"] = config[opt]
out_config["format"]["generator"] = "upstream_config_util"
out_config["format"]["timestamp"] = int(time.time())
elif opt == "temperature":
out_config["temp_control"] = {}
if "mode" in config[opt].keys():
out_config["temp_control"]["mode"] = config[opt]["mode"]
else:
out_config["temp_control"]["mode"] = "auto"
if "target" in config[opt].keys():
out_config["temp_control"]["target_temp"] = config[opt]["target"]
else:
out_config["temp_control"]["target_temp"] = 70.0
if "hot" in config[opt].keys():
out_config["temp_control"]["hot_temp"] = config[opt]["hot"]
else:
out_config["temp_control"]["hot_temp"] = 80.0
if "danger" in config[opt].keys():
out_config["temp_control"]["dangerous_temp"] = config[opt]["danger"]
else:
out_config["temp_control"]["dangerous_temp"] = 90.0
elif opt == "fans":
out_config["fan_control"] = {}
if "min_fans" in config[opt].keys():
out_config["fan_control"]["min_fans"] = config[opt]["min_fans"]
else:
out_config["fan_control"]["min_fans"] = 1
if "speed" in config[opt].keys():
out_config["fan_control"]["speed"] = config[opt]["speed"]
else:
out_config["fan_control"]["speed"] = 100
elif opt == "pool_groups":
out_config["group"] = [{} for _item in range(len(config[opt]))]
for idx in range(len(config[opt])):
out_config["group"][idx]["pools"] = []
out_config["group"][idx] = {}
if "group_name" in config[opt][idx].keys():
out_config["group"][idx]["name"] = config[opt][idx]["group_name"]
else:
out_config["group"][idx]["name"] = f"group_{idx}"
if "quota" in config[opt][idx].keys():
out_config["group"][idx]["quota"] = config[opt][idx]["quota"]
else:
out_config["group"][idx]["quota"] = 1
out_config["group"][idx]["pool"] = [
{} for _item in range(len(config[opt][idx]["pools"]))
]
for pool_idx in range(len(config[opt][idx]["pools"])):
out_config["group"][idx]["pool"][pool_idx]["url"] = config[opt][
idx
]["pools"][pool_idx]["url"]
username = config[opt][idx]["pools"][pool_idx]["username"]
if user_suffix:
if "." in username:
username = f"{username}x{user_suffix}"
else:
username = f"{username}.{user_suffix}"
out_config["group"][idx]["pool"][pool_idx]["user"] = username
out_config["group"][idx]["pool"][pool_idx]["password"] = config[
opt
][idx]["pools"][pool_idx]["password"]
elif opt == "autotuning":
out_config["autotuning"] = {}
if "enabled" in config[opt].keys():
out_config["autotuning"]["enabled"] = config[opt]["enabled"]
else:
out_config["autotuning"]["enabled"] = True
if "wattage" in config[opt].keys():
out_config["autotuning"]["psu_power_limit"] = config[opt]["wattage"]
else:
out_config["autotuning"]["psu_power_limit"] = 900
elif opt == "power_scaling":
out_config["power_scaling"] = {}
if "enabled" in config[opt].keys():
out_config["power_scaling"]["enabled"] = config[opt]["enabled"]
else:
out_config["power_scaling"]["enabled"] = False
if "power_step" in config[opt].keys():
out_config["power_scaling"]["power_step"] = config[opt]["power_step"]
else:
out_config["power_scaling"]["power_step"] = 100
if "min_psu_power_limit" in config[opt].keys():
out_config["power_scaling"]["min_psu_power_limit"] = config[opt][
"min_psu_power_limit"
]
else:
out_config["power_scaling"]["min_psu_power_limit"] = 800
if "shutdown_enabled" in config[opt].keys():
out_config["power_scaling"]["shutdown_enabled"] = config[opt][
"shutdown_enabled"
]
else:
out_config["power_scaling"]["shutdown_enabled"] = False
if "shutdown_duration" in config[opt].keys():
out_config["power_scaling"]["shutdown_duration"] = config[opt][
"shutdown_duration"
]
else:
out_config["power_scaling"]["shutdown_duration"] = 3.0
return out_config


@@ -1,17 +0,0 @@
config cgminer 'default'
option pool1pw 'x'
option pool2pw 'x'
option pool3pw 'x'
option voltage_level_offset '0'
option fan '10'
option api_allow 'W:0/0'
option power_mode 'balance'
option pool1url 'stratum+tcp://ca.stratum.slushpool.com:3333'
option pool1user 'poolacct.worker1'
option pool2url 'stratum+tcp://ca.stratum.slushpool.com:3333'
option pool2user 'poolacct.worker2'
option pool3url 'stratum+tcp://ca.stratum.slushpool.com:3333'
option pool3user 'poolacct.worker3'
option ntp_enable 'openwrt'


@@ -1,4 +0,0 @@
from tools.cfg_util import main
if __name__ == "__main__":
main()

icon.ico (binary file not shown; size before: 116 KiB)


@@ -1,18 +0,0 @@
import logging
from settings import DEBUG
logging.basicConfig(
# filename="logfile.txt",
# filemode="a",
format="[%(levelname)s][%(asctime)s](%(name)s) - %(message)s",
datefmt="%x %X",
)
logger = logging.getLogger()
if DEBUG:
logger.setLevel(logging.DEBUG)
logging.getLogger("asyncssh").setLevel(logging.DEBUG)
else:
logger.setLevel(logging.INFO)
logging.getLogger("asyncssh").setLevel(logging.WARNING)


@@ -1,44 +0,0 @@
"""
Make a build of the board tool.
Usage: make_board_tool_exe.py build
The build will show up in the build directory.
"""
import datetime
import sys
import os
from cx_Freeze import setup, Executable
base = None
if sys.platform == "win32":
base = "Win32GUI"
version = datetime.datetime.now()
version = version.strftime("%y.%m.%d")
print(version)
setup(
name="UpstreamBoardUtil.exe",
version=version,
description="Upstream Data Board Utility Build",
options={
"build_exe": {
"build_exe": f"{os.getcwd()}\\build\\board_util\\UpstreamBoardUtil-{version}-{sys.platform}\\",
"include_files": [
os.path.join(os.getcwd(), "settings/settings.toml"),
],
"include_msvcr": True,
"add_to_path": True,
},
},
executables=[
Executable(
"board_util.py",
base=base,
icon="icon.ico",
target_name="UpstreamBoardUtil.exe",
)
],
)


@@ -1,43 +0,0 @@
"""
Make a build of the config tool.
Usage: make_config_tool.py build
The build will show up in the build directory.
"""
import datetime
import sys
import os
from cx_Freeze import setup, Executable
base = None
if sys.platform == "win32":
base = "Win32GUI"
version = datetime.datetime.now()
version = version.strftime("%y.%m.%d")
print(version)
setup(
name="UpstreamCFGUtil.exe",
version=version,
description="Upstream Data Config Utility Build",
options={
"build_exe": {
"build_exe": f"{os.getcwd()}\\build\\UpstreamCFGUtil-{version}-{sys.platform}\\",
"include_files": [
os.path.join(os.getcwd(), "settings/settings.toml"),
os.path.join(os.getcwd(), "static/CFG-Util-README.md"),
],
},
},
executables=[
Executable(
"config_tool.py",
base=base,
icon="icon.ico",
target_name="UpstreamCFGUtil.exe",
)
],
)


@@ -1,11 +0,0 @@
from miners.bmminer import BMMiner
class BMMinerS9(BMMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.model = "S9"
self.api_type = "BMMiner"
def __repr__(self) -> str:
return f"BMMinerS9: {str(self.ip)}"


@@ -1,15 +0,0 @@
import logging
import toml
from miners.bosminer import BOSMiner
from config.bos import general_config_convert_bos
class BOSMinerS9(BOSMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.model = "S9"
self.api_type = "BOSMiner"
def __repr__(self) -> str:
return f"BOSminerS9: {str(self.ip)}"


@@ -1,11 +0,0 @@
from miners.cgminer import CGMiner
class CGMinerS9(CGMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.model = "S9"
self.api_type = "CGMiner"
def __repr__(self) -> str:
return f"CGMinerS9: {str(self.ip)}"


@@ -1,11 +0,0 @@
from miners.bmminer import BMMiner
class BMMinerT9(BMMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.model = "T9"
self.api_type = "BMMiner"
def __repr__(self) -> str:
return f"BMMinerT9: {str(self.ip)}"


@@ -1,11 +0,0 @@
from miners.cgminer import CGMiner
class CGMinerT9(CGMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.model = "T9"
self.api_type = "CGMiner"
def __repr__(self) -> str:
return f"CGMinerT9: {str(self.ip)}"


@@ -1,9 +0,0 @@
from miners.bmminer import BMMiner
class BMMinerX17(BMMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
def __repr__(self) -> str:
return f"BMMinerX17: {str(self.ip)}"


@@ -1,11 +0,0 @@
from miners.bosminer import BOSMiner
class BOSMinerX17(BOSMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.api_type = "BOSMiner"
self.nominal_chips = 65
def __repr__(self) -> str:
return f"BOSminerX17: {str(self.ip)}"


@@ -1,10 +0,0 @@
from miners.cgminer import CGMiner
class CGMinerX17(CGMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.api_type = "CGMiner"
def __repr__(self) -> str:
return f"CGMinerX17: {str(self.ip)}"


@@ -1,22 +0,0 @@
from miners.bmminer import BMMiner
import logging
class BMMinerX19(BMMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
def __repr__(self) -> str:
return f"BMMinerX19: {str(self.ip)}"
async def get_model(self):
if self.model:
logging.debug(f"Found model for {self.ip}: {self.model}")
return self.model
version_data = await self.api.version()
if version_data:
self.model = version_data["VERSION"][0]["Type"].replace("Antminer ", "")
logging.debug(f"Found model for {self.ip}: {self.model}")
return self.model
logging.warning(f"Failed to get model for miner: {self}")
return None


@@ -1,11 +0,0 @@
from miners.bosminer import BOSMiner
class BOSMinerX19(BOSMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.api_type = "BOSMiner"
self.nominal_chips = 114
def __repr__(self) -> str:
return f"BOSminerX19: {str(self.ip)}"


@@ -1,23 +0,0 @@
from miners.cgminer import CGMiner
import logging
class CGMinerX19(CGMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.api_type = "CGMiner"
def __repr__(self) -> str:
return f"CGMinerX19: {str(self.ip)}"
async def get_model(self):
if self.model:
logging.debug(f"Found model for {self.ip}: {self.model}")
return self.model
version_data = await self.api.version()
if version_data:
self.model = version_data["VERSION"][0]["Type"].replace("Antminer ", "")
logging.debug(f"Found model for {self.ip}: {self.model}")
return self.model
logging.warning(f"Failed to get model for miner: {self}")
return None


@@ -1,24 +0,0 @@
from miners.cgminer import CGMiner
import logging
class CGMinerAvalon10(CGMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.model = "Avalon 10"
self.api_type = "CGMiner"
async def get_hostname(self):
try:
devdetails = await self.api.devdetails()
if devdetails:
if len(devdetails.get("DEVDETAILS")) > 0:
if "Name" in devdetails["DEVDETAILS"][0]:
host = devdetails["DEVDETAILS"][0]["Name"]
logging.debug(f"Found hostname for {self.ip}: {host}")
return host
except Exception as e:
logging.warning(f"Failed to get hostname for miner: {self}")
return "?"
logging.warning(f"Failed to get hostname for miner: {self}")
return "?"


@@ -1,177 +0,0 @@
from miners.cgminer import CGMiner
import re
class CGMinerAvalon8(CGMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.model = "Avalon 8"
self.api_type = "CGMiner"
self.pattern = re.compile(
r"Ver\[(?P<Ver>[-0-9A-Fa-f+]+)\]\s"
"DNA\[(?P<DNA>[0-9A-Fa-f]+)\]\s"
"Elapsed\[(?P<Elapsed>[-0-9]+)\]\s"
"MW\[(?P<MW>[-\s0-9]+)\]\s"
"LW\[(?P<LW>[-0-9]+)\]\s"
"MH\[(?P<MH>[-\s0-9]+)\]\s"
"HW\[(?P<HW>[-0-9]+)\]\s"
"Temp\[(?P<Temp>[0-9]+)\]\s"
"TMax\[(?P<TMax>[0-9]+)\]\s"
"Fan\[(?P<Fan>[0-9]+)\]\s"
"FanR\[(?P<FanR>[0-9]+)%\]\s"
"Vi\[(?P<Vi>[-\s0-9]+)\]\s"
"Vo\[(?P<Vo>[-\s0-9]+)\]\s"
"("
"PLL0\[(?P<PLL0>[-\s0-9]+)\]\s"
"PLL1\[(?P<PLL1>[-\s0-9]+)\]\s"
"PLL2\[(?P<PLL2>[-\s0-9]+)\]\s"
"PLL3\[(?P<PLL3>[-\s0-9]+)\]\s"
")?"
"GHSmm\[(?P<GHSmm>[-.0-9]+)\]\s"
"WU\[(?P<WU>[-.0-9]+)\]\s"
"Freq\[(?P<Freq>[.0-9]+)\]\s"
"PG\[(?P<PG>[0-9]+)\]\s"
"Led\[(?P<LED>0|1)\]\s"
"MW0\[(?P<MW0>[0-9\s]+)\]\s"
"MW1\[(?P<MW1>[0-9\s]+)\]\s"
"MW2\[(?P<MW2>[0-9\s]+)\]\s"
"MW3\[(?P<MW3>[0-9\s]+)\]\s"
"TA\[(?P<TA>[0-9]+)\]\s"
"ECHU\[(?P<ECHU>[0-9\s]+)\]\s"
"ECMM\[(?P<ECMM>[0-9]+)\]\s.*"
"FAC0\[(?P<FAC0>[-0-9]+)\]\s"
"OC\[(?P<OC>[0-9]+)\]\s"
"SF0\[(?P<SF0>[-\s0-9]+)\]\s"
"SF1\[(?P<SF1>[-\s0-9]+)\]\s"
"SF2\[(?P<SF2>[-\s0-9]+)\]\s"
"SF3\[(?P<SF3>[-\s0-9]+)\]\s"
"PMUV\[(?P<PMUV>[-\s\S*]+)\]\s"
"PVT_T0\[(?P<PVT_T0>[-0-9\s]+)\]\s"
"PVT_T1\[(?P<PVT_T1>[-0-9\s]+)\]\s"
"PVT_T2\[(?P<PVT_T2>[-0-9\s]+)\]\s"
"PVT_T3\[(?P<PVT_T3>[-0-9\s]+)\]\s"
"PVT_V0_0\[(?P<PVT_V0_0>[-0-9\s]+)\]\s"
"PVT_V0_1\[(?P<PVT_V0_1>[-0-9\s]+)\]\s"
"PVT_V0_2\[(?P<PVT_V0_2>[-0-9\s]+)\]\s"
"PVT_V0_3\[(?P<PVT_V0_3>[-0-9\s]+)\]\s"
"PVT_V0_4\[(?P<PVT_V0_4>[-0-9\s]+)\]\s"
"PVT_V0_5\[(?P<PVT_V0_5>[-0-9\s]+)\]\s"
"PVT_V0_6\[(?P<PVT_V0_6>[-0-9\s]+)\]\s"
"PVT_V0_7\[(?P<PVT_V0_7>[-0-9\s]+)\]\s"
"PVT_V0_8\[(?P<PVT_V0_8>[-0-9\s]+)\]\s"
"PVT_V0_9\[(?P<PVT_V0_9>[-0-9\s]+)\]\s"
"PVT_V0_10\[(?P<PVT_V0_10>[-0-9\s]+)\]\s"
"PVT_V0_11\[(?P<PVT_V0_11>[-0-9\s]+)\]\s"
"PVT_V0_12\[(?P<PVT_V0_12>[-0-9\s]+)\]\s"
"PVT_V0_13\[(?P<PVT_V0_13>[-0-9\s]+)\]\s"
"PVT_V0_14\[(?P<PVT_V0_14>[-0-9\s]+)\]\s"
"PVT_V0_15\[(?P<PVT_V0_15>[-0-9\s]+)\]\s"
"PVT_V0_16\[(?P<PVT_V0_16>[-0-9\s]+)\]\s"
"PVT_V0_17\[(?P<PVT_V0_17>[-0-9\s]+)\]\s"
"PVT_V0_18\[(?P<PVT_V0_18>[-0-9\s]+)\]\s"
"PVT_V0_19\[(?P<PVT_V0_19>[-0-9\s]+)\]\s"
"PVT_V0_20\[(?P<PVT_V0_20>[-0-9\s]+)\]\s"
"PVT_V0_21\[(?P<PVT_V0_21>[-0-9\s]+)\]\s"
"PVT_V0_22\[(?P<PVT_V0_22>[-0-9\s]+)\]\s"
"PVT_V0_23\[(?P<PVT_V0_23>[-0-9\s]+)\]\s"
"PVT_V0_24\[(?P<PVT_V0_24>[-0-9\s]+)\]\s"
"PVT_V0_25\[(?P<PVT_V0_25>[-0-9\s]+)\]\s"
"PVT_V1_0\[(?P<PVT_V1_0>[-0-9\s]+)\]\s"
"PVT_V1_1\[(?P<PVT_V1_1>[-0-9\s]+)\]\s"
"PVT_V1_2\[(?P<PVT_V1_2>[-0-9\s]+)\]\s"
"PVT_V1_3\[(?P<PVT_V1_3>[-0-9\s]+)\]\s"
"PVT_V1_4\[(?P<PVT_V1_4>[-0-9\s]+)\]\s"
"PVT_V1_5\[(?P<PVT_V1_5>[-0-9\s]+)\]\s"
"PVT_V1_6\[(?P<PVT_V1_6>[-0-9\s]+)\]\s"
"PVT_V1_7\[(?P<PVT_V1_7>[-0-9\s]+)\]\s"
"PVT_V1_8\[(?P<PVT_V1_8>[-0-9\s]+)\]\s"
"PVT_V1_9\[(?P<PVT_V1_9>[-0-9\s]+)\]\s"
"PVT_V1_10\[(?P<PVT_V1_10>[-0-9\s]+)\]\s"
"PVT_V1_11\[(?P<PVT_V1_11>[-0-9\s]+)\]\s"
"PVT_V1_12\[(?P<PVT_V1_12>[-0-9\s]+)\]\s"
"PVT_V1_13\[(?P<PVT_V1_13>[-0-9\s]+)\]\s"
"PVT_V1_14\[(?P<PVT_V1_14>[-0-9\s]+)\]\s"
"PVT_V1_15\[(?P<PVT_V1_15>[-0-9\s]+)\]\s"
"PVT_V1_16\[(?P<PVT_V1_16>[-0-9\s]+)\]\s"
"PVT_V1_17\[(?P<PVT_V1_17>[-0-9\s]+)\]\s"
"PVT_V1_18\[(?P<PVT_V1_18>[-0-9\s]+)\]\s"
"PVT_V1_19\[(?P<PVT_V1_19>[-0-9\s]+)\]\s"
"PVT_V1_20\[(?P<PVT_V1_20>[-0-9\s]+)\]\s"
"PVT_V1_21\[(?P<PVT_V1_21>[-0-9\s]+)\]\s"
"PVT_V1_22\[(?P<PVT_V1_22>[-0-9\s]+)\]\s"
"PVT_V1_23\[(?P<PVT_V1_23>[-0-9\s]+)\]\s"
"PVT_V1_24\[(?P<PVT_V1_24>[-0-9\s]+)\]\s"
"PVT_V1_25\[(?P<PVT_V1_25>[-0-9\s]+)\]\s"
"PVT_V2_0\[(?P<PVT_V2_0>[-0-9\s]+)\]\s"
"PVT_V2_1\[(?P<PVT_V2_1>[-0-9\s]+)\]\s"
"PVT_V2_2\[(?P<PVT_V2_2>[-0-9\s]+)\]\s"
"PVT_V2_3\[(?P<PVT_V2_3>[-0-9\s]+)\]\s"
"PVT_V2_4\[(?P<PVT_V2_4>[-0-9\s]+)\]\s"
"PVT_V2_5\[(?P<PVT_V2_5>[-0-9\s]+)\]\s"
"PVT_V2_6\[(?P<PVT_V2_6>[-0-9\s]+)\]\s"
"PVT_V2_7\[(?P<PVT_V2_7>[-0-9\s]+)\]\s"
"PVT_V2_8\[(?P<PVT_V2_8>[-0-9\s]+)\]\s"
"PVT_V2_9\[(?P<PVT_V2_9>[-0-9\s]+)\]\s"
"PVT_V2_10\[(?P<PVT_V2_10>[-0-9\s]+)\]\s"
"PVT_V2_11\[(?P<PVT_V2_11>[-0-9\s]+)\]\s"
"PVT_V2_12\[(?P<PVT_V2_12>[-0-9\s]+)\]\s"
"PVT_V2_13\[(?P<PVT_V2_13>[-0-9\s]+)\]\s"
"PVT_V2_14\[(?P<PVT_V2_14>[-0-9\s]+)\]\s"
"PVT_V2_15\[(?P<PVT_V2_15>[-0-9\s]+)\]\s"
"PVT_V2_16\[(?P<PVT_V2_16>[-0-9\s]+)\]\s"
"PVT_V2_17\[(?P<PVT_V2_17>[-0-9\s]+)\]\s"
"PVT_V2_18\[(?P<PVT_V2_18>[-0-9\s]+)\]\s"
"PVT_V2_19\[(?P<PVT_V2_19>[-0-9\s]+)\]\s"
"PVT_V2_20\[(?P<PVT_V2_20>[-0-9\s]+)\]\s"
"PVT_V2_21\[(?P<PVT_V2_21>[-0-9\s]+)\]\s"
"PVT_V2_22\[(?P<PVT_V2_22>[-0-9\s]+)\]\s"
"PVT_V2_23\[(?P<PVT_V2_23>[-0-9\s]+)\]\s"
"PVT_V2_24\[(?P<PVT_V2_24>[-0-9\s]+)\]\s"
"PVT_V2_25\[(?P<PVT_V2_25>[-0-9\s]+)\]\s"
"PVT_V3_0\[(?P<PVT_V3_0>[-0-9\s]+)\]\s"
"PVT_V3_1\[(?P<PVT_V3_1>[-0-9\s]+)\]\s"
"PVT_V3_2\[(?P<PVT_V3_2>[-0-9\s]+)\]\s"
"PVT_V3_3\[(?P<PVT_V3_3>[-0-9\s]+)\]\s"
"PVT_V3_4\[(?P<PVT_V3_4>[-0-9\s]+)\]\s"
"PVT_V3_5\[(?P<PVT_V3_5>[-0-9\s]+)\]\s"
"PVT_V3_6\[(?P<PVT_V3_6>[-0-9\s]+)\]\s"
"PVT_V3_7\[(?P<PVT_V3_7>[-0-9\s]+)\]\s"
"PVT_V3_8\[(?P<PVT_V3_8>[-0-9\s]+)\]\s"
"PVT_V3_9\[(?P<PVT_V3_9>[-0-9\s]+)\]\s"
"PVT_V3_10\[(?P<PVT_V3_10>[-0-9\s]+)\]\s"
"PVT_V3_11\[(?P<PVT_V3_11>[-0-9\s]+)\]\s"
"PVT_V3_12\[(?P<PVT_V3_12>[-0-9\s]+)\]\s"
"PVT_V3_13\[(?P<PVT_V3_13>[-0-9\s]+)\]\s"
"PVT_V3_14\[(?P<PVT_V3_14>[-0-9\s]+)\]\s"
"PVT_V3_15\[(?P<PVT_V3_15>[-0-9\s]+)\]\s"
"PVT_V3_16\[(?P<PVT_V3_16>[-0-9\s]+)\]\s"
"PVT_V3_17\[(?P<PVT_V3_17>[-0-9\s]+)\]\s"
"PVT_V3_18\[(?P<PVT_V3_18>[-0-9\s]+)\]\s"
"PVT_V3_19\[(?P<PVT_V3_19>[-0-9\s]+)\]\s"
"PVT_V3_20\[(?P<PVT_V3_20>[-0-9\s]+)\]\s"
"PVT_V3_21\[(?P<PVT_V3_21>[-0-9\s]+)\]\s"
"PVT_V3_22\[(?P<PVT_V3_22>[-0-9\s]+)\]\s"
"PVT_V3_23\[(?P<PVT_V3_23>[-0-9\s]+)\]\s"
"PVT_V3_24\[(?P<PVT_V3_24>[-0-9\s]+)\]\s"
"PVT_V3_25\[(?P<PVT_V3_25>[-0-9\s]+)\]\s"
"FM\[(?P<FM>[0-9]+)\]\s"
"CRC\[(?P<CRC>[0-9\s]+)\]",
re.X,
)
def __repr__(self) -> str:
return f"CGMinerAvalon8: {str(self.ip)}"
def parse_estats(self, estats):
for estat in estats:
for key in estat:
if key[:5] == "MM ID":
self._parse_estat(estat, key)
def _parse_estat(self, estat, key):
module = estat[key]
module_info = re.match(self.pattern, module)
if not module_info:
return None
module_info = module_info.groupdict()
print(module_info)
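Each PVT_V* group in the pattern above captures a run of whitespace-separated integers (one reading per chip), while FM and CRC capture plain numbers. A minimal sketch of a helper, not part of this diff, that turns the groupdict printed by _parse_estat into usable lists:

def split_estat_groups(module_info: dict) -> dict:
    # groups like PVT_V0_0 hold whitespace-separated integers; split them into lists
    parsed = {}
    for key, value in module_info.items():
        parts = value.split()
        if len(parts) > 1:
            parsed[key] = [int(p) for p in parts]
        elif parts:
            # single-value groups such as FM stay as plain ints
            parsed[key] = int(parts[0])
        else:
            parsed[key] = None
    return parsed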

View File

@@ -1,11 +0,0 @@
from miners.cgminer import CGMiner
class CGMinerAvalon(CGMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.model = "Avalon"
self.api_type = "CGMiner"
def __repr__(self) -> str:
return f"CGMinerAvalon: {str(self.ip)}"

View File

@@ -1,116 +0,0 @@
from API.bmminer import BMMinerAPI
from miners import BaseMiner
import logging
class BMMiner(BaseMiner):
def __init__(self, ip: str) -> None:
api = BMMinerAPI(ip)
super().__init__(ip, api)
self.model = None
self.config = None
self.uname = "root"
self.pwd = "admin"
def __repr__(self) -> str:
return f"BMMiner: {str(self.ip)}"
async def get_model(self) -> str or None:
"""Get miner model.
:return: Miner model or None.
"""
# check if model is cached
if self.model:
logging.debug(f"Found model for {self.ip}: {self.model}")
return self.model
# get devdetails data
version_data = await self.api.devdetails()
# if we get data back, parse it for model
if version_data:
# handle Antminer BMMiner as a base
self.model = version_data["DEVDETAILS"][0]["Model"].replace("Antminer ", "")
logging.debug(f"Found model for {self.ip}: {self.model}")
return self.model
# if we don't get devdetails, log a failed attempt
logging.warning(f"Failed to get model for miner: {self}")
return None
async def get_hostname(self) -> str:
"""Get miner hostname.
:return: The hostname of the miner as a string or "?"
"""
try:
# open an ssh connection
async with (await self._get_ssh_connection()) as conn:
# if we get the connection, check hostname
if conn is not None:
# get output of the hostname file
data = await conn.run("cat /proc/sys/kernel/hostname")
host = data.stdout.strip()
# return hostname data
logging.debug(f"Found hostname for {self.ip}: {host}")
return host
else:
# return ? if we fail to get hostname with no ssh connection
logging.warning(f"Failed to get hostname for miner: {self}")
return "?"
except Exception as e:
# return ? if we fail to get hostname with an exception
logging.warning(f"Failed to get hostname for miner: {self}")
return "?"
async def send_ssh_command(self, cmd: str) -> str or None:
"""Send a command to the miner over ssh.
:param cmd: The command to run.
:return: Result of the command or None.
"""
result = None
# open an ssh connection
async with (await self._get_ssh_connection()) as conn:
# 3 retries
for i in range(3):
try:
# run the command and get the result
result = await conn.run(cmd)
except Exception as e:
# if the command fails, log it
logging.warning(f"{self} command {cmd} error: {e}")
# on the 3rd retry, return None
if i == 3:
return
continue
# return the result, either command output or None
return result
async def get_config(self) -> list or None:
"""Get the pool configuration of the miner.
:return: Pool config data or None.
"""
# get pool data
pools = await self.api.pools()
pool_data = []
# ensure we got pool data
if not pools:
return
# parse all the pools
for pool in pools["POOLS"]:
pool_data.append({"url": pool["URL"], "user": pool["User"], "pwd": "123"})
return pool_data
async def reboot(self) -> None:
logging.debug(f"{self}: Sending reboot command.")
await self.send_ssh_command("reboot")
logging.debug(f"{self}: Reboot command completed.")

View File

@@ -1,224 +0,0 @@
from miners import BaseMiner
from API.bosminer import BOSMinerAPI
import toml
from config.bos import bos_config_convert, general_config_convert_bos
import logging
class BOSMiner(BaseMiner):
def __init__(self, ip: str) -> None:
api = BOSMinerAPI(ip)
super().__init__(ip, api)
self.model = None
self.config = None
self.version = None
self.uname = "root"
self.pwd = "admin"
self.nominal_chips = 63
def __repr__(self) -> str:
return f"BOSminer: {str(self.ip)}"
async def send_ssh_command(self, cmd: str) -> str or None:
"""Send a command to the miner over ssh.
:return: Result of the command or None.
"""
result = None
# open an ssh connection
async with (await self._get_ssh_connection()) as conn:
# 3 retries
for i in range(3):
try:
# run the command and get the result
result = await conn.run(cmd)
except Exception as e:
# if the command fails, log it
logging.warning(f"{self} command {cmd} error: {e}")
# on the 3rd retry, return None
if i == 3:
return
continue
# return the result, either command output or None
return result
async def fault_light_on(self) -> None:
"""Sends command to turn on fault light on the miner."""
logging.debug(f"{self}: Sending fault_light on command.")
self.light = True
await self.send_ssh_command("miner fault_light on")
logging.debug(f"{self}: fault_light on command completed.")
async def fault_light_off(self) -> None:
"""Sends command to turn off fault light on the miner."""
logging.debug(f"{self}: Sending fault_light off command.")
self.light = False
await self.send_ssh_command("miner fault_light off")
logging.debug(f"{self}: fault_light off command completed.")
async def restart_backend(self) -> None:
await self.restart_bosminer()
async def restart_bosminer(self) -> None:
"""Restart bosminer hashing process."""
logging.debug(f"{self}: Sending bosminer restart command.")
await self.send_ssh_command("/etc/init.d/bosminer restart")
logging.debug(f"{self}: bosminer restart command completed.")
async def reboot(self) -> None:
"""Reboots power to the physical miner."""
logging.debug(f"{self}: Sending reboot command.")
await self.send_ssh_command("/sbin/reboot")
logging.debug(f"{self}: Reboot command completed.")
async def get_config(self) -> None:
logging.debug(f"{self}: Getting config.")
async with (await self._get_ssh_connection()) as conn:
logging.debug(f"{self}: Opening SFTP connection.")
async with conn.start_sftp_client() as sftp:
logging.debug(f"{self}: Reading config file.")
async with sftp.open("/etc/bosminer.toml") as file:
toml_data = toml.loads(await file.read())
logging.debug(f"{self}: Converting config file.")
cfg = await bos_config_convert(toml_data)
self.config = cfg
async def get_hostname(self) -> str:
"""Get miner hostname.
:return: The hostname of the miner as a string or "?"
"""
try:
async with (await self._get_ssh_connection()) as conn:
if conn is not None:
data = await conn.run("cat /proc/sys/kernel/hostname")
host = data.stdout.strip()
logging.debug(f"Found hostname for {self.ip}: {host}")
return host
else:
logging.warning(f"Failed to get hostname for miner: {self}")
return "?"
except Exception as e:
logging.warning(f"Failed to get hostname for miner: {self}")
return "?"
async def get_model(self) -> str or None:
"""Get miner model.
:return: Miner model or None.
"""
# check if model is cached
if self.model:
logging.debug(f"Found model for {self.ip}: {self.model} (BOS)")
return self.model + " (BOS)"
# get devdetails data
version_data = await self.api.devdetails()
# if we get data back, parse it for model
if version_data:
if not version_data["DEVDETAILS"] == []:
# handle Antminer BOSMiner as a base
self.model = version_data["DEVDETAILS"][0]["Model"].replace(
"Antminer ", ""
)
logging.debug(f"Found model for {self.ip}: {self.model} (BOS)")
return self.model + " (BOS)"
# if we don't get devdetails, log a failed attempt
logging.warning(f"Failed to get model for miner: {self}")
return None
async def get_version(self):
"""Get miner firmware version.
:return: Miner firmware version or None.
"""
# check if version is cached
if self.version:
logging.debug(f"Found version for {self.ip}: {self.version}")
return self.version
# get output of bos version file
version_data = await self.send_ssh_command("cat /etc/bos_version")
# if we get the version data, parse it
if version_data:
self.version = version_data.stdout.split("-")[5]
logging.debug(f"Found version for {self.ip}: {self.version}")
return self.version
# if we fail to get version, log a failed attempt
logging.warning(f"Failed to get model for miner: {self}")
return None
async def send_config(self, yaml_config, ip_user: bool = False) -> None:
"""Configures miner with yaml config."""
logging.debug(f"{self}: Sending config.")
if ip_user:
suffix = str(self.ip).split(".")[-1]
toml_conf = toml.dumps(
await general_config_convert_bos(yaml_config, user_suffix=suffix)
)
else:
toml_conf = toml.dumps(await general_config_convert_bos(yaml_config))
async with (await self._get_ssh_connection()) as conn:
logging.debug(f"{self}: Opening SFTP connection.")
async with conn.start_sftp_client() as sftp:
logging.debug(f"{self}: Opening config file.")
async with sftp.open("/etc/bosminer.toml", "w+") as file:
await file.write(toml_conf)
logging.debug(f"{self}: Restarting BOSMiner")
await conn.run("/etc/init.d/bosminer restart")
async def get_board_info(self) -> dict:
"""Gets data on each board and chain in the miner."""
logging.debug(f"{self}: Getting board info.")
devdetails = await self.api.devdetails()
if not devdetails.get("DEVDETAILS"):
print("devdetails error", devdetails)
return {0: [], 1: [], 2: []}
devs = devdetails["DEVDETAILS"]
boards = {}
offset = devs[0]["ID"]
for board in devs:
boards[board["ID"] - offset] = []
if not board["Chips"] == self.nominal_chips:
nominal = False
else:
nominal = True
boards[board["ID"] - offset].append(
{
"chain": board["ID"] - offset,
"chip_count": board["Chips"],
"chip_status": "o" * board["Chips"],
"nominal": nominal,
}
)
logging.debug(f"Found board data for {self}: {boards}")
return boards
async def get_bad_boards(self) -> dict:
"""Checks for and provides list of non working boards."""
boards = await self.get_board_info()
bad_boards = {}
for board in boards.keys():
for chain in boards[board]:
if not chain["chip_count"] == 63:
if board not in bad_boards.keys():
bad_boards[board] = []
bad_boards[board].append(chain)
return bad_boards
async def check_good_boards(self) -> str:
"""Checks for and provides list for working boards."""
devs = await self.api.devdetails()
bad = 0
chains = devs["DEVDETAILS"]
for chain in chains:
if chain["Chips"] == 0:
bad += 1
if not bad > 0:
return str(self.ip)
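The boards dict built by get_board_info above is keyed by chain index, each value a list of entries with chip_count and nominal flags. A hedged usage sketch (the import path follows the pre-rename layout shown in this diff; the address is hypothetical):

import asyncio

from miners.bosminer import BOSMiner

async def report_bad_boards(ip: str) -> None:
    miner = BOSMiner(ip)
    bad = await miner.get_bad_boards()
    for idx, chains in bad.items():
        for chain in chains:
            print(f"{ip} board {idx}: {chain['chip_count']} chips, nominal={chain['nominal']}")

# asyncio.run(report_bad_boards("10.0.0.99"))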

View File

@@ -1,72 +0,0 @@
from API.btminer import BTMinerAPI
from miners import BaseMiner
from API import APIError
import logging
class BTMiner(BaseMiner):
def __init__(self, ip: str) -> None:
api = BTMinerAPI(ip)
self.model = None
super().__init__(ip, api)
self.nominal_chips = 66
def __repr__(self) -> str:
return f"BTMiner: {str(self.ip)}"
async def get_model(self):
if self.model:
logging.debug(f"Found model for {self.ip}: {self.model}")
return self.model
version_data = await self.api.devdetails()
if version_data:
self.model = version_data["DEVDETAILS"][0]["Model"].split("V")[0]
logging.debug(f"Found model for {self.ip}: {self.model}")
return self.model
logging.warning(f"Failed to get model for miner: {self}")
return None
async def get_hostname(self) -> str:
try:
host_data = await self.api.get_miner_info()
if host_data:
host = host_data["Msg"]["hostname"]
logging.debug(f"Found hostname for {self.ip}: {host}")
return host
except APIError:
logging.warning(f"Failed to get hostname for miner: {self}")
return "?"
except Exception as e:
logging.warning(f"Failed to get hostname for miner: {self}")
return "?"
async def get_board_info(self) -> dict:
"""Gets data on each board and chain in the miner."""
logging.debug(f"{self}: Getting board info.")
devs = await self.api.devs()
if not devs.get("DEVS"):
print("devs error", devs)
return {0: [], 1: [], 2: []}
devs = devs["DEVS"]
boards = {}
offset = devs[0]["ID"]
for board in devs:
boards[board["ID"] - offset] = []
if "Effective Chips" in board.keys():
if not board["Effective Chips"] in self.nominal_chips:
nominal = False
else:
nominal = True
boards[board["ID"] - offset].append(
{
"chain": board["ID"] - offset,
"chip_count": board["Effective Chips"],
"chip_status": "o" * board["Effective Chips"],
"nominal": nominal,
}
)
else:
logging.warning(f"Incorrect board data from {self}: {board}")
print(board)
logging.debug(f"Found board data for {self}: {boards}")
return boards
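get_board_info above tests `board["Effective Chips"] in self.nominal_chips`, which works when a subclass overrides nominal_chips with a list (M21 below uses [105, 66]) but raises TypeError against the base class's plain 66. A tolerant check, sketched as a hypothetical helper rather than the project's code:

def is_nominal(effective_chips: int, nominal_chips) -> bool:
    # accept either a single expected chip count or a list of valid counts
    if isinstance(nominal_chips, (list, tuple, set)):
        return effective_chips in nominal_chips
    return effective_chips == nominal_chips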

View File

@@ -1,90 +0,0 @@
from miners import BaseMiner
from API.cgminer import CGMinerAPI
from API import APIError
class CGMiner(BaseMiner):
def __init__(self, ip: str) -> None:
api = CGMinerAPI(ip)
super().__init__(ip, api)
self.model = None
self.config = None
self.uname = "root"
self.pwd = "admin"
def __repr__(self) -> str:
return f"CGMiner: {str(self.ip)}"
async def get_model(self):
if self.model:
return self.model
try:
version_data = await self.api.devdetails()
except APIError:
return None
if version_data:
self.model = version_data["DEVDETAILS"][0]["Model"].replace("Antminer ", "")
return self.model
return None
async def get_hostname(self) -> str:
try:
async with (await self._get_ssh_connection()) as conn:
if conn is not None:
data = await conn.run("cat /proc/sys/kernel/hostname")
return data.stdout.strip()
else:
return "?"
except Exception:
return "?"
async def send_ssh_command(self, cmd):
result = None
async with (await self._get_ssh_connection()) as conn:
for i in range(3):
try:
result = await conn.run(cmd)
except Exception as e:
print(f"{cmd} error: {e}")
if i == 3:
return
continue
return result
async def restart_backend(self) -> None:
await self.restart_cgminer()
async def restart_cgminer(self) -> None:
commands = ["cgminer-api restart", "/usr/bin/cgminer-monitor >/dev/null 2>&1"]
commands = ";".join(commands)
await self.send_ssh_command(commands)
async def reboot(self) -> None:
await self.send_ssh_command("reboot")
async def start_cgminer(self) -> None:
commands = [
"mkdir -p /etc/tmp/",
'echo "*/3 * * * * /usr/bin/cgminer-monitor" > /etc/tmp/root',
"crontab -u root /etc/tmp/root",
"/usr/bin/cgminer-monitor >/dev/null 2>&1",
]
commands = ";".join(commands)
await self.send_ssh_command(commands)
async def stop_cgminer(self) -> None:
commands = [
"mkdir -p /etc/tmp/",
'echo "" > /etc/tmp/root',
"crontab -u root /etc/tmp/root",
"killall cgminer",
]
commands = ";".join(commands)
await self.send_ssh_command(commands)
async def get_config(self) -> None:
async with (await self._get_ssh_connection()) as conn:
command = "cat /etc/config/cgminer"
result = await conn.run(command, check=True)
self.config = result.stdout
print(str(self.config))

View File

@@ -1,358 +0,0 @@
from miners.antminer.S9.bosminer import BOSMinerS9
from miners.antminer.S9.bmminer import BMMinerS9
from miners.antminer.S9.cgminer import CGMinerS9
from miners.antminer.T9.hive import HiveonT9
from miners.antminer.T9.cgminer import CGMinerT9
from miners.antminer.T9.bmminer import BMMinerT9
from miners.antminer.X17.bosminer import BOSMinerX17
from miners.antminer.X17.bmminer import BMMinerX17
from miners.antminer.X17.cgminer import CGMinerX17
from miners.antminer.X19.bmminer import BMMinerX19
from miners.antminer.X19.cgminer import CGMinerX19
from miners.antminer.X19.bosminer import BOSMinerX19
from miners.whatsminer.M20 import BTMinerM20
from miners.whatsminer.M21 import BTMinerM21
from miners.whatsminer.M30 import BTMinerM30
from miners.whatsminer.M31 import BTMinerM31
from miners.whatsminer.M32 import BTMinerM32
from miners.avalonminer.Avalon8 import CGMinerAvalon8
from miners.avalonminer.Avalon10 import CGMinerAvalon10
from miners.cgminer import CGMiner
from miners.bmminer import BMMiner
from miners.bosminer import BOSMiner
from miners.unknown import UnknownMiner
from API import APIError
import asyncio
import ipaddress
import json
import logging
from settings import MINER_FACTORY_GET_VERSION_RETRIES as GET_VERSION_RETRIES
class MinerFactory:
_instance = None
def __init__(self):
self.miners = {}
def __new__(cls):
if not cls._instance:
cls._instance = super(MinerFactory, cls).__new__(cls)
return cls._instance
async def get_miner_generator(self, ips: list):
"""
Get Miner objects from ip addresses using an async generator.
Returns an asynchronous generator containing Miners.
Parameters:
ips: a list of ip addresses to get miners for.
"""
# get the event loop
loop = asyncio.get_event_loop()
# create a list of tasks
scan_tasks = []
# for each miner IP that was passed in, add a task to get its class
for miner in ips:
scan_tasks.append(loop.create_task(self.get_miner(miner)))
# asynchronously run the tasks and return them as they complete
scanned = asyncio.as_completed(scan_tasks)
# loop through and yield the miners as they complete
for miner in scanned:
yield await miner
async def get_miner(self, ip: ipaddress.ip_address):
"""Decide a miner type using the IP address of the miner."""
# check if the miner already exists in cache
if ip in self.miners:
return self.miners[ip]
# if everything fails, the miner is already set to unknown
miner = UnknownMiner(str(ip))
api = None
model = None
# try to get the API multiple times based on retries
for i in range(GET_VERSION_RETRIES):
# get the API type, should be BOSMiner, CGMiner, BMMiner, BTMiner, or None
api = await self._get_api_type(ip)
# if we find the API type, don't need to loop anymore
if api:
break
# try to get the model multiple times based on retries
for i in range(GET_VERSION_RETRIES):
# get the model, should return some miner model type, e.g. Antminer S9
model = await self._get_miner_model(ip)
# if we find the model type, don't need to loop anymore
if model:
break
# make sure we have model information
if model:
# check if the miner is an Antminer
if "Antminer" in model:
# S9 logic
if "Antminer S9" in model:
# handle the different API types
if not api:
logging.warning(f"{str(ip)}: No API data found, using BraiinsOS.")
miner = BOSMinerS9(str(ip))
elif "BOSMiner" in api:
miner = BOSMinerS9(str(ip))
elif "CGMiner" in api:
miner = CGMinerS9(str(ip))
elif "BMMiner" in api:
miner = BMMinerS9(str(ip))
elif "Antminer T9" in model:
if "BMMiner" in api:
if "Hiveon" in model:
# hiveOS, return T9 Hive
miner = HiveonT9(str(ip))
else:
miner = BMMinerT9(str(ip))
elif "CGMiner" in api:
miner = CGMinerT9(str(ip))
# X17 model logic
elif "17" in model:
# handle the different API types
if "BOSMiner" in api:
miner = BOSMinerX17(str(ip))
elif "CGMiner" in api:
miner = CGMinerX17(str(ip))
elif "BMMiner" in api:
miner = BMMinerX17(str(ip))
# X19 logic
elif "19" in model:
# handle the different API types
if "BOSMiner" in api:
miner = BOSMinerX19(str(ip))
if "CGMiner" in api:
miner = CGMinerX19(str(ip))
elif "BMMiner" in api:
miner = BMMinerX19(str(ip))
# Avalonminers
elif "avalon" in model:
if model == "avalon10":
miner = CGMinerAvalon10(str(ip))
else:
miner = CGMinerAvalon8(str(ip))
# Whatsminers
elif "M20" in model:
miner = BTMinerM20(str(ip))
elif "M21" in model:
miner = BTMinerM21(str(ip))
elif "M30" in model:
miner = BTMinerM30(str(ip))
elif "M31" in model:
miner = BTMinerM31(str(ip))
elif "M32" in model:
miner = BTMinerM32(str(ip))
# if we can't find a model, check if we found the API
else:
# return the miner base class with some API if we found it
if api:
if "BOSMiner" in api:
miner = BOSMiner(str(ip))
elif "CGMiner" in api:
miner = CGMiner(str(ip))
elif "BMMiner" in api:
miner = BMMiner(str(ip))
# save the miner to the cache at its IP
self.miners[ip] = miner
# return the miner
return miner
def clear_cached_miners(self):
"""Clear the miner factory cache."""
# empty out self.miners
self.miners = {}
async def _get_miner_model(self, ip: ipaddress.ip_address or str) -> str or None:
# instantiate model as being nothing if getting it fails
model = None
# try block in case of APIError or OSError 121 (Semaphore timeout)
try:
# send the devdetails command to the miner (will fail with no boards/devices)
data = await self._send_api_command(str(ip), "devdetails")
# sometimes data is b'', check for that
if data:
# status check, make sure the command succeeded
if data.get("STATUS"):
if not isinstance(data["STATUS"], str):
# if status is E, it's an error
if data["STATUS"][0].get("STATUS") not in ["I", "S"]:
# try an alternate method if devdetails fails
data = await self._send_api_command(str(ip), "version")
# make sure we have data
if data:
# check the keys are there to get the version
if data.get("VERSION"):
if data["VERSION"][0].get("Type"):
# save the model to be returned later
model = data["VERSION"][0]["Type"]
else:
# make sure devdetails actually contains data; if it's empty, there are no devices
if (
"DEVDETAILS" in data.keys()
and not data["DEVDETAILS"] == []
):
# check for model, for most miners
if not data["DEVDETAILS"][0]["Model"] == "":
# model of most miners
model = data["DEVDETAILS"][0]["Model"]
# if model fails, try driver
else:
# some avalonminers have model in driver
model = data["DEVDETAILS"][0]["Driver"]
else:
# if all that fails, try just version
data = await self._send_api_command(str(ip), "version")
if "VERSION" in data.keys():
model = data["VERSION"][0]["Type"]
else:
print(data)
return model
# if there are errors, we just return None
except APIError as e:
logging.debug(f"{str(ip)}: {e}")
except OSError as e:
logging.debug(f"{str(ip)}: {e}")
return model
async def _send_api_command(self, ip: ipaddress.ip_address or str, command: str):
try:
# get reader and writer streams
reader, writer = await asyncio.open_connection(str(ip), 4028)
except OSError as e:
logging.warning(f"{str(ip)} - Command {command}: {e}")
return {}
# create the command
cmd = {"command": command}
# send the command
writer.write(json.dumps(cmd).encode("utf-8"))
await writer.drain()
# instantiate data
data = b""
# loop to receive all the data
try:
while True:
d = await reader.read(4096)
if not d:
break
data += d
except Exception as e:
logging.debug(f"{str(ip)}: {e}")
try:
# some json from the API returns with a null byte (\x00) on the end
if data.endswith(b"\x00"):
# handle the null byte
str_data = data.decode("utf-8")[:-1]
else:
# no null byte
str_data = data.decode("utf-8")
# fix an error with a btminer return having an extra comma that breaks json.loads()
str_data = str_data.replace(",}", "}")
# fix an error with a btminer return having a newline that breaks json.loads()
str_data = str_data.replace("\n", "")
# fix an error with a bmminer return not having a specific comma that breaks json.loads()
str_data = str_data.replace("}{", "},{")
# parse the json
data = json.loads(str_data)
# handle bad json
except json.decoder.JSONDecodeError as e:
# raise APIError(f"Decode Error: {data}")
data = None
# close the connection
writer.close()
await writer.wait_closed()
return data
async def _get_api_type(self, ip: ipaddress.ip_address or str) -> dict or None:
"""Get data on the version of the miner to return the right miner."""
# instantiate API as None in case something fails
api = None
# try block to handle OSError 121 (Semaphore timeout)
try:
# try the version command, works on most miners
data = await self._send_api_command(str(ip), "version")
# if we got data back, try to parse it
if data:
# make sure the command succeeded
if data.get("STATUS") and not data.get("STATUS") == "E":
if data["STATUS"][0].get("STATUS") in ["I", "S"]:
# check if there are any BMMiner strings in any of the dict keys
if any(
"BMMiner" in string for string in data["VERSION"][0].keys()
):
api = "BMMiner"
# check if there are any CGMiner strings in any of the dict keys
elif any(
"CGMiner" in string for string in data["VERSION"][0].keys()
):
api = "CGMiner"
# check if there are any BOSMiner strings in any of the dict keys
elif any(
"BOSminer" in string for string in data["VERSION"][0].keys()
):
api = "BOSMiner"
# if all that fails, check the Description to see if it is a whatsminer
elif data.get("Description") and "whatsminer" in data.get(
"Description"
):
api = "BTMiner"
# return the API if we found it
if api:
return api
# if there are errors, return None
except OSError as e:
if e.winerror == 121:
return None
else:
logging.debug(f"{str(ip)}: {e}")
return None
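A hedged usage sketch of the async generator above, assuming the MinerFactory defined in this file is importable and the addresses point at reachable miners (both hypothetical here):

import asyncio
import ipaddress

async def identify(ips: list) -> None:
    factory = MinerFactory()
    # miners are yielded as each detection task completes, not in input order
    async for miner in factory.get_miner_generator(ips):
        print(repr(miner))

# asyncio.run(identify([ipaddress.ip_address("10.0.0.5")]))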

View File

@@ -1,9 +0,0 @@
from miners.btminer import BTMiner
class BTMinerM20(BTMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
def __repr__(self) -> str:
return f"M20 - BTMiner: {str(self.ip)}"

View File

@@ -1,10 +0,0 @@
from miners.btminer import BTMiner
class BTMinerM21(BTMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.nominal_chips = [105, 66]
def __repr__(self) -> str:
return f"M21 - BTMiner: {str(self.ip)}"

View File

@@ -1,9 +0,0 @@
from miners.btminer import BTMiner
class BTMinerM30(BTMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
def __repr__(self) -> str:
return f"M30- BTMiner: {str(self.ip)}"

View File

@@ -1,10 +0,0 @@
from miners.btminer import BTMiner
class BTMinerM31(BTMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.nominal_chips = [78]
def __repr__(self) -> str:
return f"M31 - BTMiner: {str(self.ip)}"

View File

@@ -1,9 +0,0 @@
from miners.btminer import BTMiner
class BTMinerM32(BTMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
def __repr__(self) -> str:
return f"M32 - BTMiner: {str(self.ip)}"

View File

@@ -1,31 +0,0 @@
import ipaddress
class MinerNetworkRange:
"""A MinerNetwork that takes a range of IP addresses.
:param ip_range: A range of IP addresses to put in the network.
Takes a string formatted as
{ip_range_1_start}-{ip_range_1_end}, {ip_range_2_start}-{ip_range_2_end}
"""
def __init__(self, ip_range: str):
ip_ranges = ip_range.replace(" ", "").split(",")
self.host_ips = []
for item in ip_ranges:
start, end = item.split("-")
start_ip = ipaddress.ip_address(start)
end_ip = ipaddress.ip_address(end)
networks = ipaddress.summarize_address_range(start_ip, end_ip)
for network in networks:
self.host_ips.append(network.network_address)
for host in network.hosts():
if host not in self.host_ips:
self.host_ips.append(host)
if network.broadcast_address not in self.host_ips:
self.host_ips.append(network.broadcast_address)
def hosts(self):
for x in self.host_ips:
yield x
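A short usage sketch following the format described in the docstring above (addresses are hypothetical):

network = MinerNetworkRange("192.168.1.10-192.168.1.15, 192.168.1.20-192.168.1.22")
for host in network.hosts():
    print(str(host))  # every address in both ranges, endpoints included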

poetry.lock generated Normal file (396 lines)
View File

@@ -0,0 +1,396 @@
[[package]]
name = "anyio"
version = "3.6.1"
description = "High level compatibility layer for multiple asynchronous event loop implementations"
category = "main"
optional = false
python-versions = ">=3.6.2"
[package.dependencies]
idna = ">=2.8"
sniffio = ">=1.1"
[package.extras]
doc = ["packaging", "sphinx-rtd-theme", "sphinx-autodoc-typehints (>=1.2.0)"]
test = ["coverage[toml] (>=4.5)", "hypothesis (>=4.0)", "pytest (>=7.0)", "pytest-mock (>=3.6.1)", "trustme", "contextlib2", "uvloop (<0.15)", "mock (>=4)", "uvloop (>=0.15)"]
trio = ["trio (>=0.16)"]
[[package]]
name = "asyncssh"
version = "2.11.0"
description = "AsyncSSH: Asynchronous SSHv2 client and server library"
category = "main"
optional = false
python-versions = ">= 3.6"
[package.dependencies]
cryptography = ">=3.1"
typing-extensions = ">=3.6"
[package.extras]
bcrypt = ["bcrypt (>=3.1.3)"]
fido2 = ["fido2 (>=0.9.2)"]
gssapi = ["gssapi (>=1.2.0)"]
libnacl = ["libnacl (>=1.4.2)"]
pkcs11 = ["python-pkcs11 (>=0.7.0)"]
pyopenssl = ["pyOpenSSL (>=17.0.0)"]
pywin32 = ["pywin32 (>=227)"]
[[package]]
name = "certifi"
version = "2022.6.15"
description = "Python package for providing Mozilla's CA Bundle."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "cffi"
version = "1.15.1"
description = "Foreign Function Interface for Python calling C code."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "cryptography"
version = "37.0.4"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
cffi = ">=1.12"
[package.extras]
docs = ["sphinx (>=1.6.5,!=1.8.0,!=3.1.0,!=3.1.1)", "sphinx-rtd-theme"]
docstest = ["pyenchant (>=1.6.11)", "twine (>=1.12.0)", "sphinxcontrib-spelling (>=4.0.1)"]
pep8test = ["black", "flake8", "flake8-import-order", "pep8-naming"]
sdist = ["setuptools_rust (>=0.11.4)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["pytest (>=6.2.0)", "pytest-benchmark", "pytest-cov", "pytest-subtests", "pytest-xdist", "pretend", "iso8601", "pytz", "hypothesis (>=1.11.4,!=3.79.2)"]
[[package]]
name = "h11"
version = "0.12.0"
description = "A pure-Python, bring-your-own-I/O implementation of HTTP/1.1"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "httpcore"
version = "0.15.0"
description = "A minimal low-level HTTP client."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
anyio = ">=3.0.0,<4.0.0"
certifi = "*"
h11 = ">=0.11,<0.13"
sniffio = ">=1.0.0,<2.0.0"
[package.extras]
http2 = ["h2 (>=3,<5)"]
socks = ["socksio (>=1.0.0,<2.0.0)"]
[[package]]
name = "httpx"
version = "0.23.0"
description = "The next generation HTTP client."
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
certifi = "*"
httpcore = ">=0.15.0,<0.16.0"
rfc3986 = {version = ">=1.3,<2", extras = ["idna2008"]}
sniffio = "*"
[package.extras]
brotli = ["brotlicffi", "brotli"]
cli = ["click (>=8.0.0,<9.0.0)", "rich (>=10,<13)", "pygments (>=2.0.0,<3.0.0)"]
http2 = ["h2 (>=3,<5)"]
socks = ["socksio (>=1.0.0,<2.0.0)"]
[[package]]
name = "idna"
version = "3.3"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "passlib"
version = "1.7.4"
description = "comprehensive password hashing framework supporting over 30 schemes"
category = "main"
optional = false
python-versions = "*"
[package.extras]
argon2 = ["argon2-cffi (>=18.2.0)"]
bcrypt = ["bcrypt (>=3.1.0)"]
build_docs = ["sphinx (>=1.6)", "sphinxcontrib-fulltoc (>=1.2.0)", "cloud-sptheme (>=1.10.1)"]
totp = ["cryptography"]
[[package]]
name = "pyaml"
version = "21.10.1"
description = "PyYAML-based module to produce pretty and readable YAML-serialized data"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
PyYAML = "*"
[[package]]
name = "pycparser"
version = "2.21"
description = "C parser in Python"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pyyaml"
version = "6.0"
description = "YAML parser and emitter for Python"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "rfc3986"
version = "1.5.0"
description = "Validating URI References per RFC 3986"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
idna = {version = "*", optional = true, markers = "extra == \"idna2008\""}
[package.extras]
idna2008 = ["idna"]
[[package]]
name = "sniffio"
version = "1.2.0"
description = "Sniff out which async library your code is running under"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "toml"
version = "0.10.2"
description = "Python Library for Tom's Obvious, Minimal Language"
category = "main"
optional = false
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "typing-extensions"
version = "4.3.0"
description = "Backported and Experimental Type Hints for Python 3.7+"
category = "main"
optional = false
python-versions = ">=3.7"
[metadata]
lock-version = "1.1"
python-versions = "^3.9"
content-hash = "8d93eafd928d7fed4b0a00d13e46982c2d4310c37acb2faec7e7a477b3f35e9c"
[metadata.files]
anyio = [
{file = "anyio-3.6.1-py3-none-any.whl", hash = "sha256:cb29b9c70620506a9a8f87a309591713446953302d7d995344d0d7c6c0c9a7be"},
{file = "anyio-3.6.1.tar.gz", hash = "sha256:413adf95f93886e442aea925f3ee43baa5a765a64a0f52c6081894f9992fdd0b"},
]
asyncssh = [
{file = "asyncssh-2.11.0-py3-none-any.whl", hash = "sha256:7302348cbd54c58d3259da17f13e77912de1b005e366b15c8b183d948c8a91a8"},
{file = "asyncssh-2.11.0.tar.gz", hash = "sha256:59c36ce77ba9dda8dd57ad875776e7105ddb1fa851bc039bb3aeadeac4f67b56"},
]
certifi = [
{file = "certifi-2022.6.15-py3-none-any.whl", hash = "sha256:fe86415d55e84719d75f8b69414f6438ac3547d2078ab91b67e779ef69378412"},
{file = "certifi-2022.6.15.tar.gz", hash = "sha256:84c85a9078b11105f04f3036a9482ae10e4621616db313fe045dd24743a0820d"},
]
cffi = [
{file = "cffi-1.15.1-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:a66d3508133af6e8548451b25058d5812812ec3798c886bf38ed24a98216fab2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:470c103ae716238bbe698d67ad020e1db9d9dba34fa5a899b5e21577e6d52ed2"},
{file = "cffi-1.15.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:9ad5db27f9cabae298d151c85cf2bad1d359a1b9c686a275df03385758e2f914"},
{file = "cffi-1.15.1-cp27-cp27m-win32.whl", hash = "sha256:b3bbeb01c2b273cca1e1e0c5df57f12dce9a4dd331b4fa1635b8bec26350bde3"},
{file = "cffi-1.15.1-cp27-cp27m-win_amd64.whl", hash = "sha256:e00b098126fd45523dd056d2efba6c5a63b71ffe9f2bbe1a4fe1716e1d0c331e"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:d61f4695e6c866a23a21acab0509af1cdfd2c013cf256bbf5b6b5e2695827162"},
{file = "cffi-1.15.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:ed9cb427ba5504c1dc15ede7d516b84757c3e3d7868ccc85121d9310d27eed0b"},
{file = "cffi-1.15.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39d39875251ca8f612b6f33e6b1195af86d1b3e60086068be9cc053aa4376e21"},
{file = "cffi-1.15.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:285d29981935eb726a4399badae8f0ffdff4f5050eaa6d0cfc3f64b857b77185"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3eb6971dcff08619f8d91607cfc726518b6fa2a9eba42856be181c6d0d9515fd"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:21157295583fe8943475029ed5abdcf71eb3911894724e360acff1d61c1d54bc"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5635bd9cb9731e6d4a1132a498dd34f764034a8ce60cef4f5319c0541159392f"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2012c72d854c2d03e45d06ae57f40d78e5770d252f195b93f581acf3ba44496e"},
{file = "cffi-1.15.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd86c085fae2efd48ac91dd7ccffcfc0571387fe1193d33b6394db7ef31fe2a4"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:fa6693661a4c91757f4412306191b6dc88c1703f780c8234035eac011922bc01"},
{file = "cffi-1.15.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:59c0b02d0a6c384d453fece7566d1c7e6b7bae4fc5874ef2ef46d56776d61c9e"},
{file = "cffi-1.15.1-cp310-cp310-win32.whl", hash = "sha256:cba9d6b9a7d64d4bd46167096fc9d2f835e25d7e4c121fb2ddfc6528fb0413b2"},
{file = "cffi-1.15.1-cp310-cp310-win_amd64.whl", hash = "sha256:ce4bcc037df4fc5e3d184794f27bdaab018943698f4ca31630bc7f84a7b69c6d"},
{file = "cffi-1.15.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:3d08afd128ddaa624a48cf2b859afef385b720bb4b43df214f85616922e6a5ac"},
{file = "cffi-1.15.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:3799aecf2e17cf585d977b780ce79ff0dc9b78d799fc694221ce814c2c19db83"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a591fe9e525846e4d154205572a029f653ada1a78b93697f3b5a8f1f2bc055b9"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3548db281cd7d2561c9ad9984681c95f7b0e38881201e157833a2342c30d5e8c"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:91fc98adde3d7881af9b59ed0294046f3806221863722ba7d8d120c575314325"},
{file = "cffi-1.15.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:94411f22c3985acaec6f83c6df553f2dbe17b698cc7f8ae751ff2237d96b9e3c"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:03425bdae262c76aad70202debd780501fabeaca237cdfddc008987c0e0f59ef"},
{file = "cffi-1.15.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:cc4d65aeeaa04136a12677d3dd0b1c0c94dc43abac5860ab33cceb42b801c1e8"},
{file = "cffi-1.15.1-cp311-cp311-win32.whl", hash = "sha256:a0f100c8912c114ff53e1202d0078b425bee3649ae34d7b070e9697f93c5d52d"},
{file = "cffi-1.15.1-cp311-cp311-win_amd64.whl", hash = "sha256:04ed324bda3cda42b9b695d51bb7d54b680b9719cfab04227cdd1e04e5de3104"},
{file = "cffi-1.15.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:50a74364d85fd319352182ef59c5c790484a336f6db772c1a9231f1c3ed0cbd7"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e263d77ee3dd201c3a142934a086a4450861778baaeeb45db4591ef65550b0a6"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:cec7d9412a9102bdc577382c3929b337320c4c4c4849f2c5cdd14d7368c5562d"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4289fc34b2f5316fbb762d75362931e351941fa95fa18789191b33fc4cf9504a"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:173379135477dc8cac4bc58f45db08ab45d228b3363adb7af79436135d028405"},
{file = "cffi-1.15.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6975a3fac6bc83c4a65c9f9fcab9e47019a11d3d2cf7f3c0d03431bf145a941e"},
{file = "cffi-1.15.1-cp36-cp36m-win32.whl", hash = "sha256:2470043b93ff09bf8fb1d46d1cb756ce6132c54826661a32d4e4d132e1977adf"},
{file = "cffi-1.15.1-cp36-cp36m-win_amd64.whl", hash = "sha256:30d78fbc8ebf9c92c9b7823ee18eb92f2e6ef79b45ac84db507f52fbe3ec4497"},
{file = "cffi-1.15.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:198caafb44239b60e252492445da556afafc7d1e3ab7a1fb3f0584ef6d742375"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:5ef34d190326c3b1f822a5b7a45f6c4535e2f47ed06fec77d3d799c450b2651e"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8102eaf27e1e448db915d08afa8b41d6c7ca7a04b7d73af6514df10a3e74bd82"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:5df2768244d19ab7f60546d0c7c63ce1581f7af8b5de3eb3004b9b6fc8a9f84b"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a8c4917bd7ad33e8eb21e9a5bbba979b49d9a97acb3a803092cbc1133e20343c"},
{file = "cffi-1.15.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2642fe3142e4cc4af0799748233ad6da94c62a8bec3a6648bf8ee68b1c7426"},
{file = "cffi-1.15.1-cp37-cp37m-win32.whl", hash = "sha256:e229a521186c75c8ad9490854fd8bbdd9a0c9aa3a524326b55be83b54d4e0ad9"},
{file = "cffi-1.15.1-cp37-cp37m-win_amd64.whl", hash = "sha256:a0b71b1b8fbf2b96e41c4d990244165e2c9be83d54962a9a1d118fd8657d2045"},
{file = "cffi-1.15.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:320dab6e7cb2eacdf0e658569d2575c4dad258c0fcc794f46215e1e39f90f2c3"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e74c6b51a9ed6589199c787bf5f9875612ca4a8a0785fb2d4a84429badaf22a"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a5c84c68147988265e60416b57fc83425a78058853509c1b0629c180094904a5"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b926aa83d1edb5aa5b427b4053dc420ec295a08e40911296b9eb1b6170f6cca"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:87c450779d0914f2861b8526e035c5e6da0a3199d8f1add1a665e1cbc6fc6d02"},
{file = "cffi-1.15.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f2c9f67e9821cad2e5f480bc8d83b8742896f1242dba247911072d4fa94c192"},
{file = "cffi-1.15.1-cp38-cp38-win32.whl", hash = "sha256:8b7ee99e510d7b66cdb6c593f21c043c248537a32e0bedf02e01e9553a172314"},
{file = "cffi-1.15.1-cp38-cp38-win_amd64.whl", hash = "sha256:00a9ed42e88df81ffae7a8ab6d9356b371399b91dbdf0c3cb1e84c03a13aceb5"},
{file = "cffi-1.15.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:54a2db7b78338edd780e7ef7f9f6c442500fb0d41a5a4ea24fff1c929d5af585"},
{file = "cffi-1.15.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:fcd131dd944808b5bdb38e6f5b53013c5aa4f334c5cad0c72742f6eba4b73db0"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:7473e861101c9e72452f9bf8acb984947aa1661a7704553a9f6e4baa5ba64415"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6c9a799e985904922a4d207a94eae35c78ebae90e128f0c4e521ce339396be9d"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3bcde07039e586f91b45c88f8583ea7cf7a0770df3a1649627bf598332cb6984"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:33ab79603146aace82c2427da5ca6e58f2b3f2fb5da893ceac0c42218a40be35"},
{file = "cffi-1.15.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5d598b938678ebf3c67377cdd45e09d431369c3b1a5b331058c338e201f12b27"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:db0fbb9c62743ce59a9ff687eb5f4afbe77e5e8403d6697f7446e5f609976f76"},
{file = "cffi-1.15.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:98d85c6a2bef81588d9227dde12db8a7f47f639f4a17c9ae08e773aa9c697bf3"},
{file = "cffi-1.15.1-cp39-cp39-win32.whl", hash = "sha256:40f4774f5a9d4f5e344f31a32b5096977b5d48560c5592e2f3d2c4374bd543ee"},
{file = "cffi-1.15.1-cp39-cp39-win_amd64.whl", hash = "sha256:70df4e3b545a17496c9b3f41f5115e69a4f2e77e94e1d2a8e1070bc0c38c8a3c"},
{file = "cffi-1.15.1.tar.gz", hash = "sha256:d400bfb9a37b1351253cb402671cea7e89bdecc294e8016a707f6d1d8ac934f9"},
]
cryptography = [
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_universal2.whl", hash = "sha256:549153378611c0cca1042f20fd9c5030d37a72f634c9326e225c9f666d472884"},
{file = "cryptography-37.0.4-cp36-abi3-macosx_10_10_x86_64.whl", hash = "sha256:a958c52505c8adf0d3822703078580d2c0456dd1d27fabfb6f76fe63d2971cd6"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f721d1885ecae9078c3f6bbe8a88bc0786b6e749bf32ccec1ef2b18929a05046"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:3d41b965b3380f10e4611dbae366f6dc3cefc7c9ac4e8842a806b9672ae9add5"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:80f49023dd13ba35f7c34072fa17f604d2f19bf0989f292cedf7ab5770b87a0b"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2dcb0b3b63afb6df7fd94ec6fbddac81b5492513f7b0436210d390c14d46ee8"},
{file = "cryptography-37.0.4-cp36-abi3-manylinux_2_24_x86_64.whl", hash = "sha256:b7f8dd0d4c1f21759695c05a5ec8536c12f31611541f8904083f3dc582604280"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:30788e070800fec9bbcf9faa71ea6d8068f5136f60029759fd8c3efec3c9dcb3"},
{file = "cryptography-37.0.4-cp36-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:190f82f3e87033821828f60787cfa42bff98404483577b591429ed99bed39d59"},
{file = "cryptography-37.0.4-cp36-abi3-win32.whl", hash = "sha256:b62439d7cd1222f3da897e9a9fe53bbf5c104fff4d60893ad1355d4c14a24157"},
{file = "cryptography-37.0.4-cp36-abi3-win_amd64.whl", hash = "sha256:f7a6de3e98771e183645181b3627e2563dcde3ce94a9e42a3f427d2255190327"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bc95ed67b6741b2607298f9ea4932ff157e570ef456ef7ff0ef4884a134cc4b"},
{file = "cryptography-37.0.4-pp37-pypy37_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:f8c0a6e9e1dd3eb0414ba320f85da6b0dcbd543126e30fcc546e7372a7fbf3b9"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:e007f052ed10cc316df59bc90fbb7ff7950d7e2919c9757fd42a2b8ecf8a5f67"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7bc997818309f56c0038a33b8da5c0bfbb3f1f067f315f9abd6fc07ad359398d"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:d204833f3c8a33bbe11eda63a54b1aad7aa7456ed769a982f21ec599ba5fa282"},
{file = "cryptography-37.0.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:75976c217f10d48a8b5a8de3d70c454c249e4b91851f6838a4e48b8f41eb71aa"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:7099a8d55cd49b737ffc99c17de504f2257e3787e02abe6d1a6d136574873441"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2be53f9f5505673eeda5f2736bea736c40f051a739bfae2f92d18aed1eb54596"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-manylinux_2_24_x86_64.whl", hash = "sha256:91ce48d35f4e3d3f1d83e29ef4a9267246e6a3be51864a5b7d2247d5086fa99a"},
{file = "cryptography-37.0.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:4c590ec31550a724ef893c50f9a97a0c14e9c851c85621c5650d699a7b88f7ab"},
{file = "cryptography-37.0.4.tar.gz", hash = "sha256:63f9c17c0e2474ccbebc9302ce2f07b55b3b3fcb211ded18a42d5764f5c10a82"},
]
h11 = [
{file = "h11-0.12.0-py3-none-any.whl", hash = "sha256:36a3cb8c0a032f56e2da7084577878a035d3b61d104230d4bd49c0c6b555a9c6"},
{file = "h11-0.12.0.tar.gz", hash = "sha256:47222cb6067e4a307d535814917cd98fd0a57b6788ce715755fa2b6c28b56042"},
]
httpcore = [
{file = "httpcore-0.15.0-py3-none-any.whl", hash = "sha256:1105b8b73c025f23ff7c36468e4432226cbb959176eab66864b8e31c4ee27fa6"},
{file = "httpcore-0.15.0.tar.gz", hash = "sha256:18b68ab86a3ccf3e7dc0f43598eaddcf472b602aba29f9aa6ab85fe2ada3980b"},
]
httpx = [
{file = "httpx-0.23.0-py3-none-any.whl", hash = "sha256:42974f577483e1e932c3cdc3cd2303e883cbfba17fe228b0f63589764d7b9c4b"},
{file = "httpx-0.23.0.tar.gz", hash = "sha256:f28eac771ec9eb4866d3fb4ab65abd42d38c424739e80c08d8d20570de60b0ef"},
]
idna = [
{file = "idna-3.3-py3-none-any.whl", hash = "sha256:84d9dd047ffa80596e0f246e2eab0b391788b0503584e8945f2368256d2735ff"},
{file = "idna-3.3.tar.gz", hash = "sha256:9d643ff0a55b762d5cdb124b8eaa99c66322e2157b69160bc32796e824360e6d"},
]
passlib = [
{file = "passlib-1.7.4-py2.py3-none-any.whl", hash = "sha256:aa6bca462b8d8bda89c70b382f0c298a20b5560af6cbfa2dce410c0a2fb669f1"},
{file = "passlib-1.7.4.tar.gz", hash = "sha256:defd50f72b65c5402ab2c573830a6978e5f202ad0d984793c8dde2c4152ebe04"},
]
pyaml = [
{file = "pyaml-21.10.1-py2.py3-none-any.whl", hash = "sha256:19985ed303c3a985de4cf8fd329b6d0a5a5b5c9035ea240eccc709ebacbaf4a0"},
{file = "pyaml-21.10.1.tar.gz", hash = "sha256:c6519fee13bf06e3bb3f20cacdea8eba9140385a7c2546df5dbae4887f768383"},
]
pycparser = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
]
pyyaml = [
{file = "PyYAML-6.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d4db7c7aef085872ef65a8fd7d6d09a14ae91f691dec3e87ee5ee0539d516f53"},
{file = "PyYAML-6.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9df7ed3b3d2e0ecfe09e14741b857df43adb5a3ddadc919a2d94fbdf78fea53c"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:77f396e6ef4c73fdc33a9157446466f1cff553d979bd00ecb64385760c6babdc"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a80a78046a72361de73f8f395f1f1e49f956c6be882eed58505a15f3e430962b"},
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"},
{file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"},
{file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"},
{file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:98c4d36e99714e55cfbaaee6dd5badbc9a1ec339ebfc3b1f52e293aee6bb71a4"},
{file = "PyYAML-6.0-cp36-cp36m-win32.whl", hash = "sha256:0283c35a6a9fbf047493e3a0ce8d79ef5030852c51e9d911a27badfde0605293"},
{file = "PyYAML-6.0-cp36-cp36m-win_amd64.whl", hash = "sha256:07751360502caac1c067a8132d150cf3d61339af5691fe9e87803040dbc5db57"},
{file = "PyYAML-6.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:819b3830a1543db06c4d4b865e70ded25be52a2e0631ccd2f6a47a2822f2fd7c"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:473f9edb243cb1935ab5a084eb238d842fb8f404ed2193a915d1784b5a6b5fc0"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0ce82d761c532fe4ec3f87fc45688bdd3a4c1dc5e0b4a19814b9009a29baefd4"},
{file = "PyYAML-6.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:231710d57adfd809ef5d34183b8ed1eeae3f76459c18fb4a0b373ad56bedcdd9"},
{file = "PyYAML-6.0-cp37-cp37m-win32.whl", hash = "sha256:c5687b8d43cf58545ade1fe3e055f70eac7a5a1a0bf42824308d868289a95737"},
{file = "PyYAML-6.0-cp37-cp37m-win_amd64.whl", hash = "sha256:d15a181d1ecd0d4270dc32edb46f7cb7733c7c508857278d3d378d14d606db2d"},
{file = "PyYAML-6.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:0b4624f379dab24d3725ffde76559cff63d9ec94e1736b556dacdfebe5ab6d4b"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:213c60cd50106436cc818accf5baa1aba61c0189ff610f64f4a3e8c6726218ba"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:9fa600030013c4de8165339db93d182b9431076eb98eb40ee068700c9c813e34"},
{file = "PyYAML-6.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:277a0ef2981ca40581a47093e9e2d13b3f1fbbeffae064c1d21bfceba2030287"},
{file = "PyYAML-6.0-cp38-cp38-win32.whl", hash = "sha256:d4eccecf9adf6fbcc6861a38015c2a64f38b9d94838ac1810a9023a0609e1b78"},
{file = "PyYAML-6.0-cp38-cp38-win_amd64.whl", hash = "sha256:1e4747bc279b4f613a09eb64bba2ba602d8a6664c6ce6396a4d0cd413a50ce07"},
{file = "PyYAML-6.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:055d937d65826939cb044fc8c9b08889e8c743fdc6a32b33e2390f66013e449b"},
{file = "PyYAML-6.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:e61ceaab6f49fb8bdfaa0f92c4b57bcfbea54c09277b1b4f7ac376bfb7a7c174"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d67d839ede4ed1b28a4e8909735fc992a923cdb84e618544973d7dfc71540803"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cba8c411ef271aa037d7357a2bc8f9ee8b58b9965831d9e51baf703280dc73d3"},
{file = "PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:40527857252b61eacd1d9af500c3337ba8deb8fc298940291486c465c8b46ec0"},
{file = "PyYAML-6.0-cp39-cp39-win32.whl", hash = "sha256:b5b9eccad747aabaaffbc6064800670f0c297e52c12754eb1d976c57e4f74dcb"},
{file = "PyYAML-6.0-cp39-cp39-win_amd64.whl", hash = "sha256:b3d267842bf12586ba6c734f89d1f5b871df0273157918b0ccefa29deb05c21c"},
{file = "PyYAML-6.0.tar.gz", hash = "sha256:68fb519c14306fec9720a2a5b45bc9f0c8d1b9c72adf45c37baedfcd949c35a2"},
]
rfc3986 = [
{file = "rfc3986-1.5.0-py2.py3-none-any.whl", hash = "sha256:a86d6e1f5b1dc238b218b012df0aa79409667bb209e58da56d0b94704e712a97"},
{file = "rfc3986-1.5.0.tar.gz", hash = "sha256:270aaf10d87d0d4e095063c65bf3ddbc6ee3d0b226328ce21e036f946e421835"},
]
sniffio = [
{file = "sniffio-1.2.0-py3-none-any.whl", hash = "sha256:471b71698eac1c2112a40ce2752bb2f4a4814c22a54a3eed3676bc0f5ca9f663"},
{file = "sniffio-1.2.0.tar.gz", hash = "sha256:c4666eecec1d3f50960c6bdf61ab7bc350648da6c126e3cf6898d8cd4ddcd3de"},
]
toml = [
{file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"},
{file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"},
]
typing-extensions = [
{file = "typing_extensions-4.3.0-py3-none-any.whl", hash = "sha256:25642c956049920a5aa49edcdd6ab1e06d7e5d467fc00e0506c44ac86fbfca02"},
{file = "typing_extensions-4.3.0.tar.gz", hash = "sha256:e6d2677a32f47fc7eb2795db1dd15c1f34eff616bcaf2cfb5e997f854fa1c4a6"},
]

View File

@@ -59,7 +59,9 @@ class BaseMinerAPI:
]
]
async def multicommand(self, *commands: str) -> dict:
async def multicommand(
self, *commands: str, ignore_x19_error: bool = False
) -> dict:
"""Creates and sends multiple commands as one command to the miner."""
logging.debug(f"{self.ip}: Sending multicommand: {[*commands]}")
# split the commands into a proper list
@@ -78,8 +80,8 @@ If you are sure you want to use this command please use API.send_command("{item}
command = "+".join(commands)
data = None
try:
data = await self.send_command(command)
except APIError as e:
data = await self.send_command(command, x19_command=ignore_x19_error)
except APIError:
try:
data = {}
# S19 handler, try again
@@ -96,9 +98,10 @@ If you are sure you want to use this command please use API.send_command("{item}
async def send_command(
self,
command: str,
command: str or bytes,
parameters: str or int or bool = None,
ignore_errors: bool = False,
x19_command: bool = False,
) -> dict:
"""Send an API command to the miner and return the result."""
try:
@@ -143,7 +146,8 @@ If you are sure you want to use this command please use API.send_command("{item}
# validate the command succeeded
validation = self.validate_command_output(data)
if not validation[0]:
logging.warning(f"{self.ip}: API Command Error: {validation[1]}")
if not x19_command:
logging.warning(f"{self.ip}: API Command Error: {validation[1]}")
raise APIError(validation[1])
return data
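A hedged sketch of how the new ignore_x19_error flag above might be used from a concrete API subclass; the import path and address are assumptions, not taken from this diff:

import asyncio

from pyasic.API.bmminer import BMMinerAPI

async def poll(ip: str) -> dict:
    api = BMMinerAPI(ip)
    # suppress the "API Command Error" warning that X19 firmware otherwise triggers on multicommand
    return await api.multicommand("summary", "pools", "stats", ignore_x19_error=True)

# asyncio.run(poll("10.0.0.5"))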
@@ -167,7 +171,10 @@ If you are sure you want to use this command please use API.send_command("{item}
return False, data["Msg"]
else:
# make sure the command succeeded
if data["STATUS"][0]["STATUS"] not in ("S", "I"):
if type(data["STATUS"]) == str:
if data["STATUS"] in ["RESTART"]:
return True, None
elif data["STATUS"][0]["STATUS"] not in ("S", "I"):
# this is an error
if data["STATUS"][0]["STATUS"] not in ("S", "I"):
return False, data["STATUS"][0]["Msg"]
@@ -193,6 +200,12 @@ If you are sure you want to use this command please use API.send_command("{item}
str_data = str_data.replace("}{", "},{")
# fix an error with a bmminer return having a specific comma that breaks json.loads()
str_data = str_data.replace("[,{", "[{")
# fix an error with Avalonminers returning inf and nan
str_data = str_data.replace("inf", "0")
str_data = str_data.replace("nan", "0")
# fix whatever this garbage from avalonminers is `,"id":1}`
if str_data.startswith(","):
str_data = f"{{{str_data[1:]}"
# parse the json
parsed_data = json.loads(str_data)
# handle bad json
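Taken together, the replacements in this hunk form one cleanup pass over the raw API payload before json.loads; a standalone sketch of the same sequence (a hypothetical helper, assuming raw bytes read from the miner socket):

import json

def clean_api_json(raw: bytes) -> dict:
    # strip the trailing null byte some firmwares append
    str_data = raw.decode("utf-8").rstrip("\x00")
    # btminer: trailing comma before a closing brace
    str_data = str_data.replace(",}", "}")
    # btminer: stray newlines inside the payload
    str_data = str_data.replace("\n", "")
    # bmminer: missing comma between back-to-back objects
    str_data = str_data.replace("}{", "},{")
    # bmminer: extra comma at the start of a list
    str_data = str_data.replace("[,{", "[{")
    # avalonminer: bare inf/nan values that json.loads rejects
    str_data = str_data.replace("inf", "0").replace("nan", "0")
    # avalonminer: payloads that start with a stray comma, e.g. ',"id":1}'
    if str_data.startswith(","):
        str_data = "{" + str_data[1:]
    return json.loads(str_data)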

View File

@@ -1,4 +1,4 @@
from API import BaseMinerAPI
from pyasic.API import BaseMinerAPI
class BMMinerAPI(BaseMinerAPI):
@@ -126,7 +126,7 @@ class BMMinerAPI(BaseMinerAPI):
:return: A confirmation of adding the pool.
"""
return await self.send_command(
"addpool", parameters=f"{url}, " f"{username}, " f"{password}"
"addpool", parameters=f"{url},{username},{password}"
)
async def poolpriority(self, *n: int) -> dict:
@@ -147,7 +147,7 @@ class BMMinerAPI(BaseMinerAPI):
:return: A confirmation of setting pool quota.
"""
return await self.send_command("poolquota", parameters=f"{n}, " f"{q}")
return await self.send_command("poolquota", parameters=f"{n},{q}")
async def disablepool(self, n: int) -> dict:
"""Disable a pool.
@@ -326,7 +326,7 @@ class BMMinerAPI(BaseMinerAPI):
:return: The results of setting config of name to n.
"""
return await self.send_command("setconfig", parameters=f"{name}, " f"{n}")
return await self.send_command("setconfig", parameters=f"{name},{n}")
async def usbstats(self) -> dict:
"""Get stats of all USB devices except ztex.
@@ -354,11 +354,9 @@ class BMMinerAPI(BaseMinerAPI):
:return: Confirmation of setting PGA n with opt[,val].
"""
if val:
return await self.send_command(
"pgaset", parameters=f"{n}, " f"{opt}, " f"{val}"
)
return await self.send_command("pgaset", parameters=f"{n},{opt},{val}")
else:
return await self.send_command("pgaset", parameters=f"{n}, " f"{opt}")
return await self.send_command("pgaset", parameters=f"{n},{opt}")
async def zero(self, which: str, summary: bool) -> dict:
"""Zero a device.
@@ -373,7 +371,7 @@ class BMMinerAPI(BaseMinerAPI):
:return: the STATUS section with info on the zero and optional
summary.
"""
return await self.send_command("zero", parameters=f"{which}, {summary}")
return await self.send_command("zero", parameters=f"{which},{summary}")
async def hotplug(self, n: int) -> dict:
"""Enable hotplug.
@@ -474,9 +472,9 @@ class BMMinerAPI(BaseMinerAPI):
:return: Confirmation of setting option opt to value val.
"""
if val:
return await self.send_command("ascset", parameters=f"{n}, {opt}, {val}")
return await self.send_command("ascset", parameters=f"{n},{opt},{val}")
else:
return await self.send_command("ascset", parameters=f"{n}, {opt}")
return await self.send_command("ascset", parameters=f"{n},{opt}")
async def lcd(self) -> dict:
"""Get a general all-in-one status summary of the miner.


@@ -1,4 +1,4 @@
from API import BaseMinerAPI
from pyasic.API import BaseMinerAPI
class BOSMinerAPI(BaseMinerAPI):
@@ -158,7 +158,7 @@ class BOSMinerAPI(BaseMinerAPI):
async def addpool(self, url: str, username: str, password: str) -> dict:
# BOS has not implemented this yet, they will in the future
raise NotImplementedError
# return await self.send_command("addpool", parameters=f"{url}, {username}, {password}")
# return await self.send_command("addpool", parameters=f"{url},{username},{password}")
async def removepool(self, n: int) -> dict:
# BOS has not implemented this yet, they will in the future


@@ -9,8 +9,8 @@ import logging
from passlib.handlers.md5_crypt import md5_crypt
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from API import BaseMinerAPI, APIError
from settings import WHATSMINER_PWD
from pyasic.API import BaseMinerAPI, APIError
from pyasic.settings import WHATSMINER_PWD
### IMPORTANT ###
@@ -163,6 +163,7 @@ class BTMinerAPI(BaseMinerAPI):
command: str or bytes,
parameters: str or int or bool = None,
ignore_errors: bool = False,
**kwargs,
) -> dict:
"""Send a command to the miner API.
@@ -434,7 +435,7 @@ class BTMinerAPI(BaseMinerAPI):
enc_command = create_privileged_cmd(token_data, command)
return await self.send_command(enc_command)
async def update_firmware(self):
async def update_firmware(self): # noqa - static
# to be determined if this will be added later
# requires a file stream in bytes
return NotImplementedError


@@ -1,4 +1,4 @@
from API import BaseMinerAPI
from pyasic.API import BaseMinerAPI
class CGMinerAPI(BaseMinerAPI):
@@ -122,7 +122,7 @@ class CGMinerAPI(BaseMinerAPI):
:return: A confirmation of adding the pool.
"""
return await self.send_command(
"addpool", parameters=f"{url}, " f"{username}, " f"{password}"
"addpool", parameters=f"{url},{username},{password}"
)
async def poolpriority(self, *n: int) -> dict:
@@ -143,7 +143,7 @@ class CGMinerAPI(BaseMinerAPI):
:return: A confirmation of setting pool quota.
"""
return await self.send_command("poolquota", parameters=f"{n}, " f"{q}")
return await self.send_command("poolquota", parameters=f"{n},{q}")
async def disablepool(self, n: int) -> dict:
"""Disable a pool.
@@ -322,7 +322,7 @@ class CGMinerAPI(BaseMinerAPI):
:return: The results of setting config of name to n.
"""
return await self.send_command("setconfig", parameters=f"{name}, " f"{n}")
return await self.send_command("setconfig", parameters=f"{name},{n}")
async def usbstats(self) -> dict:
"""Get stats of all USB devices except ztex.
@@ -350,11 +350,9 @@ class CGMinerAPI(BaseMinerAPI):
:return: Confirmation of setting PGA n with opt[,val].
"""
if val:
return await self.send_command(
"pgaset", parameters=f"{n}, " f"{opt}, " f"{val}"
)
return await self.send_command("pgaset", parameters=f"{n},{opt},{val}")
else:
return await self.send_command("pgaset", parameters=f"{n}, " f"{opt}")
return await self.send_command("pgaset", parameters=f"{n},{opt}")
async def zero(self, which: str, summary: bool) -> dict:
"""Zero a device.
@@ -369,7 +367,7 @@ class CGMinerAPI(BaseMinerAPI):
:return: the STATUS section with info on the zero and optional
summary.
"""
return await self.send_command("zero", parameters=f"{which}, " f"{summary}")
return await self.send_command("zero", parameters=f"{which},{summary}")
async def hotplug(self, n: int) -> dict:
"""Enable hotplug.
@@ -470,11 +468,9 @@ class CGMinerAPI(BaseMinerAPI):
:return: Confirmation of setting option opt to value val.
"""
if val:
return await self.send_command(
"ascset", parameters=f"{n}, " f"{opt}, " f"{val}"
)
return await self.send_command("ascset", parameters=f"{n},{opt},{val}")
else:
return await self.send_command("ascset", parameters=f"{n}, " f"{opt}")
return await self.send_command("ascset", parameters=f"{n},{opt}")
async def lcd(self) -> dict:
"""Get a general all-in-one status summary of the miner.


@@ -1,4 +1,4 @@
from API import BaseMinerAPI
from pyasic.API import BaseMinerAPI
class UnknownAPI(BaseMinerAPI):
@@ -72,7 +72,7 @@ class UnknownAPI(BaseMinerAPI):
async def addpool(self, url: str, username: str, password: str) -> dict:
# BOS has not implemented this yet, they will in the future
raise NotImplementedError
# return await self.send_command("addpool", parameters=f"{url}, {username}, {password}")
# return await self.send_command("addpool", parameters=f"{url},{username},{password}")
async def removepool(self, n: int) -> dict:
# BOS has not implemented this yet, they will in the future

pyasic/config/__init__.py

@@ -0,0 +1,366 @@
from dataclasses import dataclass, asdict
from typing import List, Literal
import random
import string
import toml
import yaml
import json
import time
@dataclass
class _Pool:
"""A dataclass for pool information.
:param url: URL of the pool.
:param username: Username on the pool.
:param password: Worker password on the pool.
"""
url: str = ""
username: str = ""
password: str = ""
def from_dict(self, data: dict):
"""Convert raw pool data as a dict to usable data and save it to this class.
:param data: The raw config data to convert.
"""
for key in data.keys():
if key == "url":
self.url = data[key]
if key in ["user", "username"]:
self.username = data[key]
if key in ["pass", "password"]:
self.password = data[key]
return self
def as_x19(self, user_suffix: str = None):
"""Convert the data in this class to a dict usable by an X19 device.
:param user_suffix: The suffix to append to username.
"""
username = self.username
if user_suffix:
username = f"{username}{user_suffix}"
pool = {"url": self.url, "user": username, "pass": self.password}
return pool
def as_avalon(self, user_suffix: str = None):
username = self.username
if user_suffix:
username = f"{username}{user_suffix}"
pool = ",".join([self.url, username, self.password])
return pool
def as_bos(self, user_suffix: str = None):
"""Convert the data in this class to a dict usable by an BOSMiner device.
:param user_suffix: The suffix to append to username.
"""
username = self.username
if user_suffix:
username = f"{username}{user_suffix}"
pool = {"url": self.url, "user": username, "password": self.password}
return pool
@dataclass
class _PoolGroup:
"""A dataclass for pool group information.
:param quota: The group quota.
:param group_name: The name of the pool group.
:param pools: A list of pools in this group.
"""
quota: int = 1
group_name: str = None
pools: List[_Pool] = None
def __post_init__(self):
if not self.group_name:
self.group_name = "".join(
random.choice(string.ascii_uppercase + string.digits) for _ in range(6)
) # generate random pool group name in case it isn't set
def from_dict(self, data: dict):
"""Convert raw pool group data as a dict to usable data and save it to this class.
:param data: The raw config data to convert.
"""
pools = []
for key in data.keys():
if key in ["name", "group_name"]:
self.group_name = data[key]
if key == "quota":
self.quota = data[key]
if key in ["pools", "pool"]:
for pool in data[key]:
pools.append(_Pool().from_dict(pool))
self.pools = pools
return self
def as_x19(self, user_suffix: str = None):
"""Convert the data in this class to a dict usable by an X19 device.
:param user_suffix: The suffix to append to username.
"""
pools = []
for pool in self.pools[:3]:
pools.append(pool.as_x19(user_suffix=user_suffix))
return pools
def as_avalon(self, user_suffix: str = None):
pool = self.pools[0].as_avalon(user_suffix=user_suffix)
return pool
def as_bos(self, user_suffix: str = None):
"""Convert the data in this class to a dict usable by an BOSMiner device.
:param user_suffix: The suffix to append to username.
"""
group = {
"name": self.group_name,
"quota": self.quota,
"pool": [pool.as_bos(user_suffix=user_suffix) for pool in self.pools],
}
return group
@dataclass
class MinerConfig:
"""A dataclass for miner configuration information.
:param pool_groups: A list of pool groups in this config.
:param temp_mode: The temperature control mode.
:param temp_target: The target temp.
:param temp_hot: The hot temp (100% fans).
:param temp_dangerous: The dangerous temp (shutdown).
:param minimum_fans: The minimum number of fans required to run the miner.
:param fan_speed: Manual fan speed to run the fan at (only if temp_mode == "manual").
:param asicboost: Whether or not to enable asicboost.
:param autotuning_enabled: Whether or not to enable autotuning.
:param autotuning_wattage: The wattage to use when autotuning.
:param dps_enabled: Whether or not to enable dynamic power scaling.
:param dps_power_step: The amount of power to reduce the autotuning limit by when the miner reaches the dangerous temp.
:param dps_min_power: The minimum power to reduce autotuning to.
:param dps_shutdown_enabled: Whether or not to shut down the miner when `dps_min_power` is reached.
:param dps_shutdown_duration: The amount of time to shut down for (in hours).
"""
pool_groups: List[_PoolGroup] = None
temp_mode: Literal["auto", "manual", "disabled"] = "auto"
temp_target: float = 70.0
temp_hot: float = 80.0
temp_dangerous: float = 100.0
minimum_fans: int = None
fan_speed: Literal[tuple(range(101))] = None # noqa - Ignore weird Literal usage
asicboost: bool = None
autotuning_enabled: bool = True
autotuning_wattage: int = 900
dps_enabled: bool = None
dps_power_step: int = None
dps_min_power: int = None
dps_shutdown_enabled: bool = None
dps_shutdown_duration: float = None
def as_dict(self):
"""Convert the data in this class to a dict."""
data_dict = asdict(self)
for key in asdict(self).keys():
if data_dict[key] is None:
del data_dict[key]
return data_dict
def as_toml(self):
"""Convert the data in this class to toml."""
return toml.dumps(self.as_dict())
def as_yaml(self):
"""Convert the data in this class to yaml."""
return yaml.dump(self.as_dict(), sort_keys=False)
def from_raw(self, data: dict):
"""Convert raw config data as a dict to usable data and save it to this class.
:param data: The raw config data to convert.
"""
pool_groups = []
for key in data.keys():
if key == "pools":
pool_groups.append(_PoolGroup().from_dict({"pools": data[key]}))
elif key == "group":
for group in data[key]:
pool_groups.append(_PoolGroup().from_dict(group))
if key == "bitmain-fan-ctrl":
if data[key]:
self.temp_mode = "manual"
if data.get("bitmain-fan-pwm"):
self.fan_speed = int(data["bitmain-fan-pwm"])
elif key == "fan_control":
for _key in data[key].keys():
if _key == "min_fans":
self.minimum_fans = data[key][_key]
elif _key == "speed":
self.fan_speed = data[key][_key]
elif key == "temp_control":
for _key in data[key].keys():
if _key == "mode":
self.temp_mode = data[key][_key]
elif _key == "target_temp":
self.temp_target = data[key][_key]
elif _key == "hot_temp":
self.temp_hot = data[key][_key]
elif _key == "dangerous_temp":
self.temp_dangerous = data[key][_key]
if key == "hash_chain_global":
if data[key].get("asic_boost"):
self.asicboost = data[key]["asic_boost"]
if key == "autotuning":
for _key in data[key].keys():
if _key == "enabled":
self.autotuning_enabled = data[key][_key]
elif _key == "psu_power_limit":
self.autotuning_wattage = data[key][_key]
if key == "power_scaling":
for _key in data[key].keys():
if _key == "enabled":
self.dps_enabled = data[key][_key]
elif _key == "power_step":
self.dps_power_step = data[key][_key]
elif _key == "min_psu_power_limit":
self.dps_min_power = data[key][_key]
elif _key == "shutdown_enabled":
self.dps_shutdown_enabled = data[key][_key]
elif _key == "shutdown_duration":
self.dps_shutdown_duration = data[key][_key]
self.pool_groups = pool_groups
return self
def from_dict(self, data: dict):
"""Convert an output dict of this class back into usable data and save it to this class.
:param data: The raw config data to convert.
"""
pool_groups = []
for group in data["pool_groups"]:
pool_groups.append(_PoolGroup().from_dict(group))
for key in data.keys():
if getattr(self, key) and not key == "pool_groups":
setattr(self, key, data[key])
self.pool_groups = pool_groups
return self
def from_toml(self, data: str):
"""Convert output toml of this class back into usable data and save it to this class.
:param data: The raw config data to convert.
"""
return self.from_dict(toml.loads(data))
def from_yaml(self, data: str):
"""Convert output yaml of this class back into usable data and save it to this class.
:param data: The raw config data to convert.
"""
return self.from_dict(yaml.load(data, Loader=yaml.SafeLoader))
def as_x19(self, user_suffix: str = None) -> str:
"""Convert the data in this class to a config usable by an X19 device.
:param user_suffix: The suffix to append to username.
"""
cfg = {
"pools": self.pool_groups[0].as_x19(user_suffix=user_suffix),
"bitmain-fan-ctrl": False,
"bitmain-fan-pwn": 100,
}
if not self.temp_mode == "auto":
cfg["bitmain-fan-ctrl"] = True
if self.fan_speed:
cfg["bitmain-fan-ctrl"] = str(self.fan_speed)
return json.dumps(cfg)
def as_avalon(self, user_suffix: str = None) -> str:
cfg = self.pool_groups[0].as_avalon(user_suffix=user_suffix)
return cfg
def as_bos(self, model: str = "S9", user_suffix: str = None) -> str:
"""Convert the data in this class to a config usable by an BOSMiner device.
:param model: The model of the miner to be used in the format portion of the config.
:param user_suffix: The suffix to append to username.
"""
cfg = {
"format": {
"version": "1.2+",
"model": f"Antminer {model}",
"generator": "Upstream Config Utility",
"timestamp": int(time.time()),
},
"group": [
group.as_bos(user_suffix=user_suffix) for group in self.pool_groups
],
"temp_control": {
"mode": self.temp_mode,
"target_temp": self.temp_target,
"hot_temp": self.temp_hot,
"dangerous_temp": self.temp_dangerous,
},
}
if self.autotuning_enabled or self.autotuning_wattage:
cfg["autotuning"] = {}
if self.autotuning_enabled:
cfg["autotuning"]["enabled"] = self.autotuning_enabled
if self.autotuning_wattage:
cfg["autotuning"]["psu_power_limit"] = self.autotuning_wattage
if self.asicboost:
cfg["hash_chain_global"] = {}
cfg["hash_chain_global"]["asic_boost"] = self.asicboost
if any(
[
getattr(self, item)
for item in [
"dps_enabled",
"dps_power_step",
"dps_min_power",
"dps_shutdown_enabled",
"dps_shutdown_duration",
]
]
):
cfg["power_scaling"] = {}
if self.dps_enabled:
cfg["power_scaling"]["enabled"] = self.dps_enabled
if self.dps_power_step:
cfg["power_scaling"]["power_step"] = self.dps_power_step
if self.dps_min_power:
cfg["power_scaling"]["min_psu_power_limit"] = self.dps_min_power
if self.dps_shutdown_enabled:
cfg["power_scaling"]["shutdown_enabled"] = self.dps_shutdown_enabled
if self.dps_shutdown_duration:
cfg["power_scaling"]["shutdown_duration"] = self.dps_shutdown_duration
return toml.dumps(cfg)
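A usage sketch of the round trip this class is meant to support (the pool details and user suffix are placeholders):

from pyasic.config import MinerConfig

raw = {
    "group": [
        {
            "name": "example",
            "quota": 1,
            "pool": [
                {"url": "stratum+tcp://pool.example.com:3333", "user": "worker.1", "password": "x"}
            ],
        }
    ],
    "temp_control": {"mode": "auto", "target_temp": 70.0},
}
cfg = MinerConfig().from_raw(raw)    # parse a BOSMiner-style raw config
print(cfg.as_yaml())                 # generic yaml form
print(cfg.as_bos(model="S9"))        # BOSMiner toml config
print(cfg.as_x19(user_suffix="x1"))  # X19 json config, usernames get "x1" appended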

pyasic/data/__init__.py

@@ -0,0 +1,118 @@
from dataclasses import dataclass, field, asdict
from datetime import datetime
@dataclass
class MinerData:
"""A Dataclass to standardize data returned from miners (specifically AnyMiner().get_data())
:param ip: The IP of the miner as a str.
:param datetime: The time and date this data was generated.
:param model: The model of the miner as a str.
:param hostname: The network hostname of the miner as a str.
:param hashrate: The hashrate of the miner in TH/s as a float.
:param left_board_temp: The temp of the left PCB as an int.
:param left_board_chip_temp: The temp of the left board chips as an int.
:param center_board_temp: The temp of the center PCB as an int.
:param center_board_chip_temp: The temp of the center board chips as an int.
:param right_board_temp: The temp of the right PCB as an int.
:param right_board_chip_temp: The temp of the right board chips as an int.
:param wattage: Current power draw of the miner as an int.
:param wattage_limit: Power limit of the miner as an int.
:param fan_1: The speed of the first fan as an int.
:param fan_2: The speed of the second fan as an int.
:param fan_3: The speed of the third fan as an int.
:param fan_4: The speed of the fourth fan as an int.
:param left_chips: The number of chips online in the left board as an int.
:param center_chips: The number of chips online in the center board as an int.
:param right_chips: The number of chips online in the right board as an int.
:param ideal_chips: The ideal number of chips in the miner as an int.
:param pool_split: The pool split as a str.
:param pool_1_url: The first pool url on the miner as a str.
:param pool_1_user: The first pool user on the miner as a str.
:param pool_2_url: The second pool url on the miner as a str.
:param pool_2_user: The second pool user on the miner as a str.
"""
ip: str
datetime: datetime = None
mac: str = "00:00:00:00:00:00"
model: str = "Unknown"
hostname: str = "Unknown"
hashrate: float = 0
temperature_avg: int = field(init=False)
env_temp: float = 0
left_board_temp: int = 0
left_board_chip_temp: int = 0
center_board_temp: int = 0
center_board_chip_temp: int = 0
right_board_temp: int = 0
right_board_chip_temp: int = 0
wattage: int = 0
wattage_limit: int = 0
fan_1: int = -1
fan_2: int = -1
fan_3: int = -1
fan_4: int = -1
left_chips: int = 0
center_chips: int = 0
right_chips: int = 0
total_chips: int = field(init=False)
ideal_chips: int = 1
percent_ideal: float = field(init=False)
nominal: int = field(init=False)
pool_split: str = "0"
pool_1_url: str = "Unknown"
pool_1_user: str = "Unknown"
pool_2_url: str = ""
pool_2_user: str = ""
def __post_init__(self):
self.datetime = datetime.now()
@property
def total_chips(self): # noqa - Skip PyCharm inspection
return self.right_chips + self.center_chips + self.left_chips
@total_chips.setter
def total_chips(self, val):
pass
@property
def nominal(self): # noqa - Skip PyCharm inspection
return self.ideal_chips == self.total_chips
@nominal.setter
def nominal(self, val):
pass
@property
def percent_ideal(self): # noqa - Skip PyCharm inspection
return round((self.total_chips / self.ideal_chips) * 100)
@percent_ideal.setter
def percent_ideal(self, val):
pass
@property
def temperature_avg(self): # noqa - Skip PyCharm inspection
total_temp = 0
temp_count = 0
for temp in [
self.left_board_chip_temp,
self.center_board_chip_temp,
self.right_board_chip_temp,
]:
if temp and not temp == 0:
total_temp += temp
temp_count += 1
if not temp_count > 0:
return 0
return round(total_temp / temp_count)
@temperature_avg.setter
def temperature_avg(self, val):
pass
def asdict(self):
return asdict(self)
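A small sketch of how the derived fields behave; the numbers below are made up:

from pyasic.data import MinerData

d = MinerData(ip="192.168.1.10", ideal_chips=189)  # e.g. 3 boards of 63 chips
d.left_chips = 63
d.center_chips = 63
d.right_chips = 60
print(d.total_chips)           # 186, summed on access
print(d.percent_ideal)         # 98
print(d.nominal)               # False, since not every chip is online
print(d.asdict()["hashrate"])  # 0, defaults are kept in the dict form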

pyasic/logger/__init__.py

@@ -0,0 +1,31 @@
import logging
from pyasic.settings import DEBUG, LOGFILE
def init_logger():
if LOGFILE:
logging.basicConfig(
filename="logfile.txt",
filemode="a",
format="%(pathname)s:%(lineno)d in %(funcName)s\n[%(levelname)s][%(asctime)s](%(name)s) - %(message)s",
datefmt="%x %X",
)
else:
logging.basicConfig(
format="%(pathname)s:%(lineno)d in %(funcName)s\n[%(levelname)s][%(asctime)s](%(name)s) - %(message)s",
datefmt="%x %X",
)
_logger = logging.getLogger()
if DEBUG:
_logger.setLevel(logging.DEBUG)
logging.getLogger("asyncssh").setLevel(logging.DEBUG)
else:
_logger.setLevel(logging.WARNING)
logging.getLogger("asyncssh").setLevel(logging.WARNING)
return _logger
logger = init_logger()


@@ -1,27 +1,36 @@
from API.bmminer import BMMinerAPI
from API.bosminer import BOSMinerAPI
from API.cgminer import CGMinerAPI
from API.btminer import BTMinerAPI
from API.unknown import UnknownAPI
import ipaddress
import asyncssh
import logging
import ipaddress
from pyasic.data import MinerData
class BaseMiner:
def __init__(
self,
ip: str,
api: BMMinerAPI or BOSMinerAPI or CGMinerAPI or BTMinerAPI or UnknownAPI,
) -> None:
self.ip = ipaddress.ip_address(ip)
self.uname = None
self.pwd = None
self.api = api
def __init__(self, *args) -> None:
self.ip = None
self.uname = "root"
self.pwd = "admin"
self.api = None
self.api_type = None
self.model = None
self.light = None
self.hostname = None
self.nominal_chips = 1
self.version = None
self.fan_count = 2
self.config = None
def __repr__(self):
return f"{'' if not self.api_type else self.api_type} {'' if not self.model else self.model}: {str(self.ip)}"
def __lt__(self, other):
return ipaddress.ip_address(self.ip) < ipaddress.ip_address(other.ip)
def __gt__(self, other):
return ipaddress.ip_address(self.ip) > ipaddress.ip_address(other.ip)
def __eq__(self, other):
return ipaddress.ip_address(self.ip) == ipaddress.ip_address(other.ip)
async def _get_ssh_connection(self) -> asyncssh.connect:
"""Create a new asyncssh connection"""
@@ -39,20 +48,26 @@ class BaseMiner:
conn = await asyncssh.connect(
str(self.ip),
known_hosts=None,
username="admin",
username="root",
password="admin",
server_host_key_algs=["ssh-rsa"],
)
return conn
except Exception as e:
logging.warning(f"{self} raised an exception: {e}")
# logging.warning(f"{self} raised an exception: {e}")
raise e
except OSError:
except OSError as e:
logging.warning(f"Connection refused: {self}")
return None
except Exception as e:
logging.warning(f"{self} raised an exception: {e}")
raise e
except Exception as e:
# logging.warning(f"{self} raised an exception: {e}")
raise e
async def fault_light_on(self) -> bool:
return False
async def fault_light_off(self) -> bool:
return False
async def send_file(self, src, dest):
async with (await self._get_ssh_connection()) as conn:
@@ -74,10 +89,16 @@ class BaseMiner:
return None
async def reboot(self):
return None
return False
async def restart_backend(self):
return False
async def send_config(self, *args, **kwargs):
return None
async def send_config(self, yaml_config):
async def get_mac(self):
return None
async def get_data(self) -> MinerData:
return MinerData(ip=str(self.ip))
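Because the comparison methods above key on IP, concrete miner objects can be sorted by address; a small sketch with placeholder IPs:

from pyasic.miners._backends import BMMiner  # any BaseMiner subclass works the same way

miners = [BMMiner("10.0.1.20"), BMMiner("10.0.1.3"), BMMiner("10.0.1.11")]
for miner in sorted(miners):
    print(miner)  # repr prints api_type and IP, lowest address first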


@@ -0,0 +1,5 @@
from .bmminer import BMMiner
from .bosminer import BOSMiner
from .btminer import BTMiner
from .cgminer import CGMiner
from .hiveon import Hiveon


@@ -0,0 +1,264 @@
import ipaddress
import logging
from pyasic.API.bmminer import BMMinerAPI
from pyasic.miners import BaseMiner
from pyasic.data import MinerData
from pyasic.settings import MINER_FACTORY_GET_VERSION_RETRIES as DATA_RETRIES
class BMMiner(BaseMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.ip = ipaddress.ip_address(ip)
self.api = BMMinerAPI(ip)
self.api_type = "BMMiner"
self.uname = "root"
self.pwd = "admin"
async def get_model(self) -> str or None:
"""Get miner model.
:return: Miner model or None.
"""
# check if model is cached
if self.model:
logging.debug(f"Found model for {self.ip}: {self.model}")
return self.model
# get devdetails data
version_data = await self.api.devdetails()
# if we get data back, parse it for model
if version_data:
# handle Antminer BMMiner as a base
self.model = version_data["DEVDETAILS"][0]["Model"].replace("Antminer ", "")
logging.debug(f"Found model for {self.ip}: {self.model}")
return self.model
# if we don't get devdetails, log a failed attempt
logging.warning(f"Failed to get model for miner: {self}")
return None
async def get_hostname(self) -> str:
"""Get miner hostname.
:return: The hostname of the miner as a string or "?"
"""
if self.hostname:
return self.hostname
try:
# open an ssh connection
async with (await self._get_ssh_connection()) as conn:
# if we get the connection, check hostname
if conn is not None:
# get output of the hostname file
data = await conn.run("cat /proc/sys/kernel/hostname")
host = data.stdout.strip()
# return hostname data
logging.debug(f"Found hostname for {self.ip}: {host}")
self.hostname = host
return self.hostname
else:
# return ? if we fail to get hostname with no ssh connection
logging.warning(f"Failed to get hostname for miner: {self}")
return "?"
except Exception:
# return ? if we fail to get hostname with an exception
logging.warning(f"Failed to get hostname for miner: {self}")
return "?"
async def send_ssh_command(self, cmd: str) -> str or None:
"""Send a command to the miner over ssh.
:param cmd: The command to run.
:return: Result of the command or None.
"""
result = None
# open an ssh connection
async with (await self._get_ssh_connection()) as conn:
# 3 retries
for i in range(3):
try:
# run the command and get the result
result = await conn.run(cmd)
result = result.stdout
except Exception as e:
# if the command fails, log it
logging.warning(f"{self} command {cmd} error: {e}")
# on the 3rd retry, return None
if i == 2:
return
continue
# return the result, either command output or None
return result
async def get_config(self) -> list or None:
"""Get the pool configuration of the miner.
:return: Pool config data or None.
"""
# get pool data
pools = await self.api.pools()
pool_data = []
# ensure we got pool data
if not pools:
return
# parse all the pools
for pool in pools["POOLS"]:
pool_data.append({"url": pool["URL"], "user": pool["User"], "pwd": "123"})
return pool_data
async def reboot(self) -> bool:
logging.debug(f"{self}: Sending reboot command.")
_ret = await self.send_ssh_command("reboot")
logging.debug(f"{self}: Reboot command completed.")
if isinstance(_ret, str):
return True
return False
async def get_data(self) -> MinerData:
data = MinerData(ip=str(self.ip), ideal_chips=self.nominal_chips * 3)
board_offset = -1
fan_offset = -1
model = await self.get_model()
hostname = await self.get_hostname()
mac = await self.get_mac()
if model:
data.model = model
if hostname:
data.hostname = hostname
if mac:
data.mac = mac
miner_data = None
for i in range(DATA_RETRIES):
miner_data = await self.api.multicommand(
"summary", "pools", "stats", ignore_x19_error=True
)
if miner_data:
break
if not miner_data:
return data
summary = miner_data.get("summary")[0]
pools = miner_data.get("pools")[0]
stats = miner_data.get("stats")[0]
if summary:
hr = summary.get("SUMMARY")
if hr:
if len(hr) > 0:
hr = hr[0].get("GHS av")
if hr:
data.hashrate = round(hr / 1000, 2)
if stats:
boards = stats.get("STATS")
if boards:
if len(boards) > 0:
for board_num in range(1, 16, 5):
for _b_num in range(5):
b = boards[1].get(f"chain_acn{board_num + _b_num}")
if b and not b == 0 and board_offset == -1:
board_offset = board_num
if board_offset == -1:
board_offset = 1
data.left_chips = boards[1].get(f"chain_acn{board_offset}")
data.center_chips = boards[1].get(f"chain_acn{board_offset+1}")
data.right_chips = boards[1].get(f"chain_acn{board_offset+2}")
if stats:
temp = stats.get("STATS")
if temp:
if len(temp) > 1:
for fan_num in range(1, 8, 4):
for _f_num in range(4):
f = temp[1].get(f"fan{fan_num + _f_num}")
if f and not f == 0 and fan_offset == -1:
fan_offset = fan_num
if fan_offset == -1:
fan_offset = 1
for fan in range(self.fan_count):
setattr(
data, f"fan_{fan + 1}", temp[1].get(f"fan{fan_offset+fan}")
)
board_map = {0: "left_board", 1: "center_board", 2: "right_board"}
env_temp_list = []
for item in range(3):
board_temp = temp[1].get(f"temp{item + board_offset}")
chip_temp = temp[1].get(f"temp2_{item + board_offset}")
setattr(data, f"{board_map[item]}_chip_temp", chip_temp)
setattr(data, f"{board_map[item]}_temp", board_temp)
if f"temp_pcb{item}" in temp[1].keys():
env_temp = temp[1][f"temp_pcb{item}"].split("-")[0]
if not env_temp == "0":
env_temp_list.append(int(env_temp))
if len(env_temp_list) > 0:
data.env_temp = sum(env_temp_list) / len(env_temp_list)
if pools:
pool_1 = None
pool_2 = None
pool_1_user = None
pool_2_user = None
pool_1_quota = 1
pool_2_quota = 1
quota = 0
for pool in pools.get("POOLS"):
if not pool_1_user:
pool_1_user = pool.get("User")
pool_1 = pool["URL"]
pool_1_quota = pool["Quota"]
elif not pool_2_user:
pool_2_user = pool.get("User")
pool_2 = pool["URL"]
pool_2_quota = pool["Quota"]
if not pool.get("User") == pool_1_user:
if not pool_2_user == pool.get("User"):
pool_2_user = pool.get("User")
pool_2 = pool["URL"]
pool_2_quota = pool["Quota"]
if pool_2_user and not pool_2_user == pool_1_user:
quota = f"{pool_1_quota}/{pool_2_quota}"
if pool_1:
pool_1 = pool_1.replace("stratum+tcp://", "").replace(
"stratum2+tcp://", ""
)
data.pool_1_url = pool_1
if pool_1_user:
data.pool_1_user = pool_1_user
if pool_2:
pool_2 = pool_2.replace("stratum+tcp://", "").replace(
"stratum2+tcp://", ""
)
data.pool_2_url = pool_2
if pool_2_user:
data.pool_2_user = pool_2_user
if quota:
data.pool_split = str(quota)
return data
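For reference, the offset scan above probes chain_acn1 through chain_acn15 because the first populated chain index varies between firmwares; a standalone sketch of the same detection against a hypothetical stats entry:

stats = {"chain_acn6": 63, "chain_acn7": 63, "chain_acn8": 61}  # hypothetical bmminer stats values

board_offset = -1
for board_num in range(1, 16, 5):  # probe 1-5, 6-10, 11-15
    for _b_num in range(5):
        if stats.get(f"chain_acn{board_num + _b_num}") and board_offset == -1:
            board_offset = board_num
if board_offset == -1:
    board_offset = 1
chips = [stats.get(f"chain_acn{board_offset + i}", 0) for i in range(3)]
print(board_offset, chips)  # 6 [63, 63, 61]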


@@ -0,0 +1,406 @@
import ipaddress
import logging
import json
import toml
from pyasic.miners import BaseMiner
from pyasic.API.bosminer import BOSMinerAPI
from pyasic.API import APIError
from pyasic.data import MinerData
from pyasic.config import MinerConfig
from pyasic.settings import MINER_FACTORY_GET_VERSION_RETRIES as DATA_RETRIES
class BOSMiner(BaseMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.ip = ipaddress.ip_address(ip)
self.api = BOSMinerAPI(ip)
self.api_type = "BOSMiner"
self.uname = "root"
self.pwd = "admin"
self.config = None
async def send_ssh_command(self, cmd: str) -> str or None:
"""Send a command to the miner over ssh.
:return: Result of the command or None.
"""
result = None
# open an ssh connection
async with (await self._get_ssh_connection()) as conn:
# 3 retries
for i in range(3):
try:
# run the command and get the result
result = await conn.run(cmd)
result = result.stdout
except Exception as e:
# if the command fails, log it
logging.warning(f"{self} command {cmd} error: {e}")
# on the 3rd retry, return None
if i == 2:
return
continue
# return the result, either command output or None
return str(result)
async def fault_light_on(self) -> bool:
"""Sends command to turn on fault light on the miner."""
logging.debug(f"{self}: Sending fault_light on command.")
self.light = True
_ret = await self.send_ssh_command("miner fault_light on")
logging.debug(f"{self}: fault_light on command completed.")
if isinstance(_ret, str):
return True
return False
async def fault_light_off(self) -> bool:
"""Sends command to turn off fault light on the miner."""
logging.debug(f"{self}: Sending fault_light off command.")
self.light = False
_ret = await self.send_ssh_command("miner fault_light off")
logging.debug(f"{self}: fault_light off command completed.")
if isinstance(_ret, str):
return True
return False
async def restart_backend(self) -> bool:
return await self.restart_bosminer()
async def restart_bosminer(self) -> bool:
"""Restart bosminer hashing process."""
logging.debug(f"{self}: Sending bosminer restart command.")
_ret = await self.send_ssh_command("/etc/init.d/bosminer restart")
logging.debug(f"{self}: bosminer restart command completed.")
if isinstance(_ret, str):
return True
return False
async def reboot(self) -> bool:
"""Reboots power to the physical miner."""
logging.debug(f"{self}: Sending reboot command.")
_ret = await self.send_ssh_command("/sbin/reboot")
logging.debug(f"{self}: Reboot command completed.")
if isinstance(_ret, str):
return True
return False
async def get_config(self) -> None:
logging.debug(f"{self}: Getting config.")
async with (await self._get_ssh_connection()) as conn:
logging.debug(f"{self}: Opening SFTP connection.")
async with conn.start_sftp_client() as sftp:
logging.debug(f"{self}: Reading config file.")
async with sftp.open("/etc/bosminer.toml") as file:
toml_data = toml.loads(await file.read())
logging.debug(f"{self}: Converting config file.")
cfg = MinerConfig().from_raw(toml_data)
self.config = cfg
return self.config
async def get_hostname(self) -> str:
"""Get miner hostname.
:return: The hostname of the miner as a string or "?"
"""
if self.hostname:
return self.hostname
try:
async with (await self._get_ssh_connection()) as conn:
if conn is not None:
data = await conn.run("cat /proc/sys/kernel/hostname")
host = data.stdout.strip()
logging.debug(f"Found hostname for {self.ip}: {host}")
self.hostname = host
return self.hostname
else:
logging.warning(f"Failed to get hostname for miner: {self}")
return "?"
except Exception:
logging.warning(f"Failed to get hostname for miner: {self}")
return "?"
async def get_model(self) -> str or None:
"""Get miner model.
:return: Miner model or None.
"""
# check if model is cached
if self.model:
logging.debug(f"Found model for {self.ip}: {self.model} (BOS)")
return self.model + " (BOS)"
# get devdetails data
try:
version_data = await self.api.devdetails()
except APIError as e:
version_data = None
if e.message == "Not ready":
cfg = json.loads(await self.send_ssh_command("bosminer config --data"))
model = cfg.get("data").get("format").get("model")
if model:
model = model.replace("Antminer ", "")
self.model = model
return self.model + " (BOS)"
# if we get data back, parse it for model
if version_data:
if not version_data["DEVDETAILS"] == []:
# handle Antminer BOSMiner as a base
self.model = version_data["DEVDETAILS"][0]["Model"].replace(
"Antminer ", ""
)
logging.debug(f"Found model for {self.ip}: {self.model} (BOS)")
return self.model + " (BOS)"
# if we don't get devdetails, log a failed attempt
logging.warning(f"Failed to get model for miner: {self}")
return None
async def get_version(self):
"""Get miner firmware version.
:return: Miner firmware version or None.
"""
# check if version is cached
if self.version:
logging.debug(f"Found version for {self.ip}: {self.version}")
return self.version
# get output of bos version file
version_data = await self.send_ssh_command("cat /etc/bos_version")
# if we get the version data, parse it
if version_data:
self.version = version_data.split("-")[5]
logging.debug(f"Found version for {self.ip}: {self.version}")
return self.version
# if we fail to get version, log a failed attempt
logging.warning(f"Failed to get model for miner: {self}")
return None
async def send_config(self, yaml_config, ip_user: bool = False) -> None:
"""Configures miner with yaml config."""
logging.debug(f"{self}: Sending config.")
if ip_user:
suffix = str(self.ip).split(".")[-1]
toml_conf = (
MinerConfig()
.from_yaml(yaml_config)
.as_bos(model=self.model.replace(" (BOS)", ""), user_suffix=suffix)
)
else:
toml_conf = (
MinerConfig()
.from_yaml(yaml_config)
.as_bos(model=self.model.replace(" (BOS)", ""))
)
async with (await self._get_ssh_connection()) as conn:
logging.debug(f"{self}: Opening SFTP connection.")
async with conn.start_sftp_client() as sftp:
logging.debug(f"{self}: Opening config file.")
async with sftp.open("/etc/bosminer.toml", "w+") as file:
await file.write(toml_conf)
logging.debug(f"{self}: Restarting BOSMiner")
await conn.run("/etc/init.d/bosminer restart")
async def get_board_info(self) -> dict:
"""Gets data on each board and chain in the miner."""
logging.debug(f"{self}: Getting board info.")
devdetails = await self.api.devdetails()
if not devdetails.get("DEVDETAILS"):
print("devdetails error", devdetails)
return {0: [], 1: [], 2: []}
devs = devdetails["DEVDETAILS"]
boards = {}
offset = devs[0]["ID"]
for board in devs:
boards[board["ID"] - offset] = []
if not board["Chips"] == self.nominal_chips:
nominal = False
else:
nominal = True
boards[board["ID"] - offset].append(
{
"chain": board["ID"] - offset,
"chip_count": board["Chips"],
"chip_status": "o" * board["Chips"],
"nominal": nominal,
}
)
logging.debug(f"Found board data for {self}: {boards}")
return boards
async def get_bad_boards(self) -> dict:
"""Checks for and provides list of non working boards."""
boards = await self.get_board_info()
bad_boards = {}
for board in boards.keys():
for chain in boards[board]:
if not chain["chip_count"] == 63:
if board not in bad_boards.keys():
bad_boards[board] = []
bad_boards[board].append(chain)
return bad_boards
async def check_good_boards(self) -> str:
"""Checks for and provides list for working boards."""
devs = await self.api.devdetails()
bad = 0
chains = devs["DEVDETAILS"]
for chain in chains:
if chain["Chips"] == 0:
bad += 1
if not bad > 0:
return str(self.ip)
async def get_data(self) -> MinerData:
data = MinerData(ip=str(self.ip), ideal_chips=self.nominal_chips * 3)
board_offset = -1
fan_offset = -1
model = await self.get_model()
hostname = await self.get_hostname()
mac = await self.get_mac()
if model:
data.model = model
if hostname:
data.hostname = hostname
if mac:
data.mac = mac
miner_data = None
for i in range(DATA_RETRIES):
try:
miner_data = await self.api.multicommand(
"summary", "temps", "tunerstatus", "pools", "devdetails", "fans"
)
except APIError as e:
if str(e.message) == "Not ready":
miner_data = await self.api.multicommand(
"summary", "tunerstatus", "pools", "fans"
)
if miner_data:
break
if not miner_data:
return data
summary = miner_data.get("summary")
temps = miner_data.get("temps")
tunerstatus = miner_data.get("tunerstatus")
pools = miner_data.get("pools")
devdetails = miner_data.get("devdetails")
fans = miner_data.get("fans")
if summary:
hr = summary[0].get("SUMMARY")
if hr:
if len(hr) > 0:
hr = hr[0].get("MHS 1m")
if hr:
data.hashrate = round(hr / 1000000, 2)
if temps:
temp = temps[0].get("TEMPS")
if temp:
if len(temp) > 0:
board_map = {0: "left_board", 1: "center_board", 2: "right_board"}
offset = 6 if temp[0]["ID"] in [6, 7, 8] else temp[0]["ID"]
for board in temp:
_id = board["ID"] - offset
chip_temp = round(board["Chip"])
board_temp = round(board["Board"])
setattr(data, f"{board_map[_id]}_chip_temp", chip_temp)
setattr(data, f"{board_map[_id]}_temp", board_temp)
if fans:
fan_data = fans[0].get("FANS")
if fan_data:
for fan in range(self.fan_count):
setattr(data, f"fan_{fan+1}", fan_data[fan]["RPM"])
if pools:
pool_1 = None
pool_2 = None
pool_1_user = None
pool_2_user = None
pool_1_quota = 1
pool_2_quota = 1
quota = 0
for pool in pools[0].get("POOLS"):
if not pool_1_user:
pool_1_user = pool.get("User")
pool_1 = pool["URL"]
pool_1_quota = pool["Quota"]
elif not pool_2_user:
pool_2_user = pool.get("User")
pool_2 = pool["URL"]
pool_2_quota = pool["Quota"]
if not pool.get("User") == pool_1_user:
if not pool_2_user == pool.get("User"):
pool_2_user = pool.get("User")
pool_2 = pool["URL"]
pool_2_quota = pool["Quota"]
if pool_2_user and not pool_2_user == pool_1_user:
quota = f"{pool_1_quota}/{pool_2_quota}"
if pool_1:
pool_1 = pool_1.replace("stratum+tcp://", "").replace(
"stratum2+tcp://", ""
)
data.pool_1_url = pool_1
if pool_1_user:
data.pool_1_user = pool_1_user
if pool_2:
pool_2 = pool_2.replace("stratum+tcp://", "").replace(
"stratum2+tcp://", ""
)
data.pool_2_url = pool_2
if pool_2_user:
data.pool_2_user = pool_2_user
if quota:
data.pool_split = str(quota)
if tunerstatus:
tuner = tunerstatus[0].get("TUNERSTATUS")
if tuner:
if len(tuner) > 0:
wattage = tuner[0].get("ApproximateMinerPowerConsumption")
wattage_limit = tuner[0].get("PowerLimit")
if wattage_limit:
data.wattage_limit = wattage_limit
if wattage:
data.wattage = wattage
if devdetails:
boards = devdetails[0].get("DEVDETAILS")
if boards:
if len(boards) > 0:
board_map = {0: "left_chips", 1: "center_chips", 2: "right_chips"}
offset = 6 if boards[0]["ID"] in [6, 7, 8] else boards[0]["ID"]
for board in boards:
_id = board["ID"] - offset
chips = board["Chips"]
setattr(data, board_map[_id], chips)
return data
async def get_mac(self):
result = await self.send_ssh_command("cat /sys/class/net/eth0/address")
return result.upper().strip()
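A usage sketch of the config path above (the IP and yaml content are placeholders): get_config reads /etc/bosminer.toml into a MinerConfig, and send_config converts a yaml-described config back to toml and restarts bosminer.

from pyasic.miners._backends import BOSMiner

async def reconfigure(ip: str, yaml_config: str):
    miner = BOSMiner(ip)
    current = await miner.get_config()  # MinerConfig parsed from /etc/bosminer.toml
    print(current.as_yaml())            # inspect the running config
    await miner.get_model()             # cache the model; send_config embeds it in the config header
    await miner.send_config(yaml_config, ip_user=True)  # appends the last IP octet to each username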


@@ -0,0 +1,50 @@
import logging
import ipaddress
from pyasic.API.bosminer import BOSMinerAPI
from pyasic.miners import BaseMiner
class BOSMinerOld(BaseMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.ip = ipaddress.ip_address(ip)
self.api = BOSMinerAPI(ip)
self.api_type = "BOSMiner"
self.uname = "root"
self.pwd = "admin"
async def send_ssh_command(self, cmd: str) -> str or None:
"""Send a command to the miner over ssh.
:return: Result of the command or None.
"""
result = None
# open an ssh connection
async with (await self._get_ssh_connection()) as conn:
# 3 retries
for i in range(3):
try:
# run the command and get the result
result = await conn.run(cmd)
if result.stdout:
result = result.stdout
except Exception as e:
if e == "SSH connection closed":
return "Update completed."
# if the command fails, log it
logging.warning(f"{self} command {cmd} error: {e}")
# on the 3rd retry, return None
if i == 2:
return
continue
# return the result, either command output or None
return str(result)
async def update_to_plus(self):
result = await self.send_ssh_command("opkg update && opkg install bos_plus")
return result


@@ -0,0 +1,247 @@
import ipaddress
import logging
from pyasic.API.btminer import BTMinerAPI
from pyasic.miners import BaseMiner
from pyasic.API import APIError
from pyasic.data import MinerData
from pyasic.settings import MINER_FACTORY_GET_VERSION_RETRIES as DATA_RETRIES
class BTMiner(BaseMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.ip = ipaddress.ip_address(ip)
self.api = BTMinerAPI(ip)
self.api_type = "BTMiner"
async def get_model(self):
if self.model:
logging.debug(f"Found model for {self.ip}: {self.model}")
return self.model
version_data = await self.api.devdetails()
if version_data:
self.model = version_data["DEVDETAILS"][0]["Model"].split("V")[0]
logging.debug(f"Found model for {self.ip}: {self.model}")
return self.model
logging.warning(f"Failed to get model for miner: {self}")
return None
async def get_hostname(self) -> str or None:
if self.hostname:
return self.hostname
try:
host_data = await self.api.get_miner_info()
if host_data:
host = host_data["Msg"]["hostname"]
logging.debug(f"Found hostname for {self.ip}: {host}")
self.hostname = host
return self.hostname
except APIError:
logging.info(f"Failed to get hostname for miner: {self}")
return None
except Exception:
logging.warning(f"Failed to get hostname for miner: {self}")
return None
async def get_board_info(self) -> dict:
"""Gets data on each board and chain in the miner."""
logging.debug(f"{self}: Getting board info.")
devs = await self.api.devs()
if not devs.get("DEVS"):
print("devs error", devs)
return {0: [], 1: [], 2: []}
devs = devs["DEVS"]
boards = {}
offset = devs[0]["ID"]
for board in devs:
boards[board["ID"] - offset] = []
if "Effective Chips" in board.keys():
if not board["Effective Chips"] in self.nominal_chips:
nominal = False
else:
nominal = True
boards[board["ID"] - offset].append(
{
"chain": board["ID"] - offset,
"chip_count": board["Effective Chips"],
"chip_status": "o" * board["Effective Chips"],
"nominal": nominal,
}
)
else:
logging.warning(f"Incorrect board data from {self}: {board}")
print(board)
logging.debug(f"Found board data for {self}: {boards}")
return boards
async def get_mac(self):
mac = ""
data = await self.api.summary()
if data:
if data.get("SUMMARY"):
if len(data["SUMMARY"]) > 0:
_mac = data["SUMMARY"][0].get("MAC")
if _mac:
mac = _mac
if mac == "":
try:
data = await self.api.get_miner_info()
if data:
if "Msg" in data.keys():
if "mac" in data["Msg"].keys():
mac = data["Msg"]["mac"]
except APIError:
pass
return str(mac).upper()
async def get_data(self):
data = MinerData(ip=str(self.ip), ideal_chips=self.nominal_chips * 3)
mac = None
try:
model = await self.get_model()
except APIError:
logging.info(f"Failed to get model: {self}")
model = None
data.model = "Whatsminer"
try:
hostname = await self.get_hostname()
except APIError:
logging.info(f"Failed to get hostname: {self}")
hostname = None
data.hostname = "Whatsminer"
if model:
data.model = model
if hostname:
data.hostname = hostname
miner_data = None
for i in range(DATA_RETRIES):
try:
miner_data = await self.api.multicommand("summary", "devs", "pools")
if miner_data:
break
except APIError:
pass
if not miner_data:
return data
summary = miner_data.get("summary")[0]
devs = miner_data.get("devs")[0]
pools = miner_data.get("pools")[0]
if summary:
summary_data = summary.get("SUMMARY")
if summary_data:
if len(summary_data) > 0:
if summary_data[0].get("MAC"):
mac = summary_data[0]["MAC"]
if summary_data[0].get("Env Temp"):
data.env_temp = summary_data[0]["Env Temp"]
data.fan_1 = summary_data[0]["Fan Speed In"]
data.fan_2 = summary_data[0]["Fan Speed Out"]
hr = summary_data[0].get("MHS 1m")
if hr:
data.hashrate = round(hr / 1000000, 2)
wattage = summary_data[0].get("Power")
if wattage:
data.wattage = round(wattage)
data.wattage_limit = round(wattage)
if devs:
temp_data = devs.get("DEVS")
if temp_data:
board_map = {0: "left_board", 1: "center_board", 2: "right_board"}
for board in temp_data:
_id = board["ASC"]
chip_temp = round(board["Chip Temp Avg"])
board_temp = round(board["Temperature"])
setattr(data, f"{board_map[_id]}_chip_temp", chip_temp)
setattr(data, f"{board_map[_id]}_temp", board_temp)
if devs:
boards = devs.get("DEVS")
if boards:
if len(boards) > 0:
board_map = {0: "left_chips", 1: "center_chips", 2: "right_chips"}
if "ID" in boards[0].keys():
id_key = "ID"
else:
id_key = "ASC"
offset = boards[0][id_key]
for board in boards:
_id = board[id_key] - offset
chips = board["Effective Chips"]
setattr(data, board_map[_id], chips)
if pools:
pool_1 = None
pool_2 = None
pool_1_user = None
pool_2_user = None
pool_1_quota = 1
pool_2_quota = 1
quota = 0
for pool in pools.get("POOLS"):
if not pool_1_user:
pool_1_user = pool.get("User")
pool_1 = pool["URL"]
pool_1_quota = pool["Quota"]
elif not pool_2_user:
pool_2_user = pool.get("User")
pool_2 = pool["URL"]
pool_2_quota = pool["Quota"]
if not pool.get("User") == pool_1_user:
if not pool_2_user == pool.get("User"):
pool_2_user = pool.get("User")
pool_2 = pool["URL"]
pool_2_quota = pool["Quota"]
if pool_2_user and not pool_2_user == pool_1_user:
quota = f"{pool_1_quota}/{pool_2_quota}"
if pool_1:
pool_1 = pool_1.replace("stratum+tcp://", "").replace(
"stratum2+tcp://", ""
)
data.pool_1_url = pool_1
if pool_1_user:
data.pool_1_user = pool_1_user
if pool_2:
pool_2 = pool_2.replace("stratum+tcp://", "").replace(
"stratum2+tcp://", ""
)
data.pool_2_url = pool_2
if pool_2_user:
data.pool_2_user = pool_2_user
if quota:
data.pool_split = str(quota)
if not mac:
try:
mac = await self.get_mac()
except APIError:
logging.info(f"Failed to get mac: {self}")
mac = None
if mac:
data.mac = mac
return data


@@ -0,0 +1,210 @@
import ipaddress
import logging
from pyasic.API.cgminer import CGMinerAPI
from pyasic.miners import BaseMiner
from pyasic.API import APIError
from pyasic.data import MinerData
from pyasic.settings import MINER_FACTORY_GET_VERSION_RETRIES as DATA_RETRIES
class CGMiner(BaseMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.ip = ipaddress.ip_address(ip)
self.api = CGMinerAPI(ip)
self.api_type = "CGMiner"
self.uname = "root"
self.pwd = "admin"
self.config = None
async def get_model(self):
if self.model:
return self.model
try:
version_data = await self.api.devdetails()
except APIError:
return None
if version_data:
self.model = version_data["DEVDETAILS"][0]["Model"].replace("Antminer ", "")
return self.model
return None
async def get_hostname(self) -> str or None:
if self.hostname:
return self.hostname
try:
async with (await self._get_ssh_connection()) as conn:
if conn is not None:
data = await conn.run("cat /proc/sys/kernel/hostname")
host = data.stdout.strip()
self.hostname = host
return self.hostname
else:
return None
except Exception:
return None
async def send_ssh_command(self, cmd):
result = None
async with (await self._get_ssh_connection()) as conn:
for i in range(3):
try:
result = await conn.run(cmd)
result = result.stdout
except Exception as e:
print(f"{cmd} error: {e}")
if i == 2:
return
continue
return result
async def restart_backend(self) -> bool:
return await self.restart_cgminer()
async def restart_cgminer(self) -> bool:
commands = ["cgminer-api restart", "/usr/bin/cgminer-monitor >/dev/null 2>&1"]
commands = ";".join(commands)
_ret = await self.send_ssh_command(commands)
if isinstance(_ret, str):
return True
return False
async def reboot(self) -> bool:
logging.debug(f"{self}: Sending reboot command.")
_ret = await self.send_ssh_command("reboot")
logging.debug(f"{self}: Reboot command completed.")
if isinstance(_ret, str):
return True
return False
async def start_cgminer(self) -> None:
commands = [
"mkdir -p /etc/tmp/",
'echo "*/3 * * * * /usr/bin/cgminer-monitor" > /etc/tmp/root',
"crontab -u root /etc/tmp/root",
"/usr/bin/cgminer-monitor >/dev/null 2>&1",
]
commands = ";".join(commands)
await self.send_ssh_command(commands)
async def stop_cgminer(self) -> None:
commands = [
"mkdir -p /etc/tmp/",
'echo "" > /etc/tmp/root',
"crontab -u root /etc/tmp/root",
"killall cgminer",
]
commands = ";".join(commands)
await self.send_ssh_command(commands)
async def get_config(self) -> None:
async with (await self._get_ssh_connection()) as conn:
command = "cat /etc/config/cgminer"
result = await conn.run(command, check=True)
self.config = result.stdout
print(str(self.config))
async def get_data(self):
data = MinerData(ip=str(self.ip), ideal_chips=self.nominal_chips * 3)
model = await self.get_model()
hostname = await self.get_hostname()
mac = await self.get_mac()
if model:
data.model = model
if hostname:
data.hostname = hostname
if mac:
data.mac = mac
miner_data = None
for i in range(DATA_RETRIES):
miner_data = await self.api.multicommand("summary", "pools", "stats")
if miner_data:
break
if not miner_data:
return data
summary = miner_data.get("summary")[0]
pools = miner_data.get("pools")[0]
stats = miner_data.get("stats")[0]
if summary:
hr = summary.get("SUMMARY")
if hr:
if len(hr) > 0:
hr = hr[0].get("GHS 1m")
if hr:
data.hashrate = round(hr / 1000, 2)
if stats:
temp = stats.get("STATS")
if temp:
if len(temp) > 1:
data.fan_1 = temp[1].get("fan1")
data.fan_2 = temp[1].get("fan2")
data.fan_3 = temp[1].get("fan3")
data.fan_4 = temp[1].get("fan4")
board_map = {1: "left_board", 2: "center_board", 3: "right_board"}
for item in range(1, 4):
board_temp = temp[1].get(f"temp{item}")
chip_temp = temp[1].get(f"temp2_{item}")
setattr(data, f"{board_map[item]}_chip_temp", chip_temp)
setattr(data, f"{board_map[item]}_temp", board_temp)
if pools:
pool_1 = None
pool_2 = None
pool_1_user = None
pool_2_user = None
pool_1_quota = 1
pool_2_quota = 1
quota = 0
for pool in pools.get("POOLS"):
if not pool_1_user:
pool_1_user = pool.get("User")
pool_1 = pool["URL"]
pool_1_quota = pool["Quota"]
elif not pool_2_user:
pool_2_user = pool.get("User")
pool_2 = pool["URL"]
pool_2_quota = pool["Quota"]
if not pool.get("User") == pool_1_user:
if not pool_2_user == pool.get("User"):
pool_2_user = pool.get("User")
pool_2 = pool["URL"]
pool_2_quota = pool["Quota"]
if pool_2_user and not pool_2_user == pool_1_user:
quota = f"{pool_1_quota}/{pool_2_quota}"
if pool_1:
pool_1 = pool_1.replace("stratum+tcp://", "").replace(
"stratum2+tcp://", ""
)
data.pool_1_url = pool_1
if pool_1_user:
data.pool_1_user = pool_1_user
if pool_2:
pool_2 = pool_2.replace("stratum+tcp://", "").replace(
"stratum2+tcp://", ""
)
data.pool_2_url = pool_2
if pool_2_user:
data.pool_2_user = pool_2_user
if quota:
data.pool_split = str(quota)
return data


@@ -1,14 +1,14 @@
from miners.bmminer import BMMiner
from pyasic.miners._backends import BMMiner
import ipaddress
class HiveonT9(BMMiner):
class Hiveon(BMMiner):
def __init__(self, ip: str) -> None:
super().__init__(ip)
self.model = "T9"
self.ip = ipaddress.ip_address(ip)
self.api_type = "Hiveon"
def __repr__(self) -> str:
return f"HiveonT9: {str(self.ip)}"
self.uname = "root"
self.pwd = "admin"
async def get_board_info(self) -> dict:
"""Gets data on each board and chain in the miner."""


@@ -0,0 +1,3 @@
from .antminer import *
from .avalonminer import *
from .whatsminer import *


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class S17(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "S17"
self.nominal_chips = 48
self.fan_count = 4


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class S17Plus(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "S17+"
self.nominal_chips = 65
self.fan_count = 4


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class S17Pro(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "S17 Pro"
self.nominal_chips = 48
self.fan_count = 4


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class S17e(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "S17e"
self.nominal_chips = 135
self.fan_count = 4


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class T17(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "T17"
self.nominal_chips = 30
self.fan_count = 4


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class T17Plus(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "T17+"
self.nominal_chips = 44
self.fan_count = 4


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class T17e(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "T17e"
self.nominal_chips = 78
self.fan_count = 4


@@ -0,0 +1,8 @@
from .S17 import S17
from .S17_Plus import S17Plus
from .S17_Pro import S17Pro
from .S17e import S17e
from .T17 import T17
from .T17_Plus import T17Plus
from .T17e import T17e


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class S19(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "S19"
self.nominal_chips = 76
self.fan_count = 4


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class S19Pro(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "S19 Pro"
self.nominal_chips = 114
self.fan_count = 4


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class S19a(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "S19a"
self.nominal_chips = 72
self.fan_count = 4


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class S19j(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "S19j"
self.nominal_chips = 114
self.fan_count = 4


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class S19jPro(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "S19j Pro"
self.nominal_chips = 126
self.fan_count = 4


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class T19(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "T19"
self.nominal_chips = 76
self.fan_count = 4


@@ -0,0 +1,9 @@
from .S19 import S19
from .S19_Pro import S19Pro
from .S19j import S19j
from .S19j_Pro import S19jPro
from .S19a import S19a
from .T19 import T19


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class S9(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "S9"
self.nominal_chips = 63
self.fan_count = 2


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class S9i(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "S9i"
self.nominal_chips = 63
self.fan_count = 2


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class T9(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "T9"
self.nominal_chips = 57
self.fan_count = 2


@@ -0,0 +1,3 @@
from .S9 import S9
from .S9i import S9i
from .T9 import T9


@@ -0,0 +1,3 @@
from .X9 import *
from .X17 import *
from .X19 import *


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class Avalon1026(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "Avalon 1026"
self.nominal_chips = 80
self.fan_count = 2


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class Avalon1047(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "Avalon 1047"
self.nominal_chips = 80
self.fan_count = 2


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class Avalon1066(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "Avalon 1066"
self.nominal_chips = 114
self.fan_count = 4


@@ -0,0 +1,3 @@
from .A1026 import Avalon1026
from .A1047 import Avalon1047
from .A1066 import Avalon1066


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class Avalon721(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "Avalon 721"
self.chip_count = 18 # This miner has 4 boards totaling 72
self.fan_count = 1 # also only 1 fan


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class Avalon741(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "Avalon 741"
self.chip_count = 22 # This miner has 4 boards totaling 88
self.fan_count = 1 # also only 1 fan


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class Avalon761(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "Avalon 761"
self.chip_count = 18 # This miner has 4 boards totaling 72
self.fan_count = 1 # also only 1 fan


@@ -0,0 +1,3 @@
from .A721 import Avalon721
from .A741 import Avalon741
from .A761 import Avalon761


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class Avalon821(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "Avalon 821"
self.chip_count = 26 # This miner has 4 boards totaling 104
self.fan_count = 1 # also only 1 fan


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class Avalon841(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "Avalon 841"
self.chip_count = 26 # This miner has 4 boards totaling 104
self.fan_count = 1 # also only 1 fan


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class Avalon851(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "Avalon 851"
self.chip_count = 26 # This miner has 4 boards totaling 104
self.fan_count = 1 # also only 1 fan


@@ -0,0 +1,3 @@
from .A821 import Avalon821
from .A841 import Avalon841
from .A851 import Avalon851


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class Avalon921(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "Avalon 921"
self.chip_count = 26 # This miner has 4 boards totaling 104
self.fan_count = 1 # also only 1 fan


@@ -0,0 +1 @@
from .A921 import Avalon921


@@ -0,0 +1,4 @@
from .A7X import *
from .A8X import *
from .A9X import *
from .A10X import *


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class M20S(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "M20S"
self.nominal_chips = 66
self.fan_count = 2


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class M20SPlus(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "M20S+"
self.nominal_chips = 66
self.fan_count = 2


@@ -0,0 +1,10 @@
from pyasic.miners import BaseMiner
class M21(BaseMiner):
def __init__(self, ip: str):
super().__init__()
self.ip = ip
self.model = "M21"
self.nominal_chips = 105
self.fan_count = 2

Some files were not shown because too many files have changed in this diff.